path: root/mm/slub.c
Age | Commit message | Author | Files | Lines
2011-11-28 | slub: add missed accounting | Shaohua Li | 1 | -2/+5
2011-11-28 | Merge branch 'slab/urgent' into slab/next | Pekka Enberg | 1 | -16/+26
2011-11-24 | slub: avoid potential NULL dereference or corruption | Eric Dumazet | 1 | -10/+11
2011-11-24 | slub: use irqsafe_cpu_cmpxchg for put_cpu_partial | Christoph Lameter | 1 | -1/+1
2011-11-16 | slub: add taint flag outputting to debug paths | Dave Jones | 1 | -1/+1
2011-11-15 | slub: move discard_slab out of node lock | Shaohua Li | 1 | -4/+12
2011-11-15 | slub: use correct parameter to add a page to partial list tail | Shaohua Li | 1 | -1/+2
2011-11-01 | lib/string.c: introduce memchr_inv() | Akinobu Mita | 1 | -45/+2
2011-10-26 | Merge branches 'slab/next' and 'slub/partial' into slab/for-linus | Pekka Enberg | 1 | -166/+392
2011-09-28 | slub: Discard slab page when node partial > minimum partial number | Alex Shi | 1 | -1/+1
2011-09-28 | slub: correct comments error for per cpu partial | Alex Shi | 1 | -1/+1
2011-09-27 | mm: restrict access to slab files under procfs and sysfs | Vasiliy Kulikov | 1 | -3/+4
2011-09-19 | Merge branch 'slab/urgent' into slab/next | Pekka Enberg | 1 | -10/+12
2011-09-13 | slub: Code optimization in get_partial_node() | Alex,Shi | 1 | -4/+2
2011-08-27 | slub: explicitly document position of inserting slab to partial list | Shaohua Li | 1 | -6/+6
2011-08-27 | slub: add slab with one free object to partial list tail | Shaohua Li | 1 | -1/+1
2011-08-19 | slub: per cpu cache for partial pages | Christoph Lameter | 1 | -47/+292
2011-08-19 | slub: return object pointer from get_partial() / new_slab(). | Christoph Lameter | 1 | -60/+73
2011-08-19 | slub: pass kmem_cache_cpu pointer to get_partial() | Christoph Lameter | 1 | -15/+15
2011-08-19 | slub: Prepare inuse field in new_slab() | Christoph Lameter | 1 | -3/+2
2011-08-19 | slub: Remove useless statements in __slab_alloc | Christoph Lameter | 1 | -4/+0
2011-08-19 | slub: free slabs without holding locks | Christoph Lameter | 1 | -13/+13
2011-08-09 | slub: Fix partial count comparison confusion | Christoph Lameter | 1 | -1/+1
2011-08-09 | slub: fix check_bytes() for slub debugging | Akinobu Mita | 1 | -1/+1
2011-08-09 | slub: Fix full list corruption if debugging is on | Christoph Lameter | 1 | -2/+4
2011-07-31 | slub: use print_hex_dump | Sebastian Andrzej Siewior | 1 | -35/+9
2011-07-30 | Merge branch 'slub/lockless' of git://git.kernel.org/pub/scm/linux/kernel/git... | Linus Torvalds | 1 | -252/+512
2011-07-26 | Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jik... | Linus Torvalds | 1 | -1/+1
2011-07-25 | slub: When allocating a new slab also prep the first object | Christoph Lameter | 1 | -0/+3
2011-07-22 | Merge branch 'slab-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/gi... | Linus Torvalds | 1 | -2/+103
2011-07-21 | treewide: fix potentially dangerous trailing ';' in #defined values/expressions | Phil Carmody | 1 | -1/+1
2011-07-18 | slub: disable interrupts in cmpxchg_double_slab when falling back to pagelock | Christoph Lameter | 1 | -4/+45
2011-07-07 | SLUB: Fix missing <linux/stacktrace.h> include | Pekka Enberg | 1 | -0/+1
2011-07-07 | slub: reduce overhead of slub_debug | Marcin Slusarz | 1 | -2/+34
2011-07-07 | slub: Add method to verify memory is not freed | Ben Greear | 1 | -0/+36
2011-07-07 | slub: Enable backtrace for create/delete points | Ben Greear | 1 | -0/+32
2011-07-02 | slub: Not necessary to check for empty slab on load_freelist | Christoph Lameter | 1 | -3/+2
2011-07-02 | slub: fast release on full slab | Christoph Lameter | 1 | -2/+19
2011-07-02 | slub: Add statistics for the case that the current slab does not match the node | Christoph Lameter | 1 | -0/+3
2011-07-02 | slub: Get rid of the another_slab label | Christoph Lameter | 1 | -6/+5
2011-07-02 | slub: Avoid disabling interrupts in free slowpath | Christoph Lameter | 1 | -11/+5
2011-07-02 | slub: Disable interrupts in free_debug processing | Christoph Lameter | 1 | -4/+10
2011-07-02 | slub: Invert locking and avoid slab lock | Christoph Lameter | 1 | -77/+52
2011-07-02 | slub: Rework allocator fastpaths | Christoph Lameter | 1 | -129/+280
2011-07-02 | slub: Pass kmem_cache struct to lock and freeze slab | Christoph Lameter | 1 | -7/+8
2011-07-02 | slub: explicit list_lock taking | Christoph Lameter | 1 | -40/+49
2011-07-02 | slub: Add cmpxchg_double_slab() | Christoph Lameter | 1 | -5/+60
2011-07-02 | slub: Move page->frozen handling near where the page->freelist handling occurs | Christoph Lameter | 1 | -2/+6
2011-07-02 | slub: Do not use frozen page flag but a bit in the page counters | Christoph Lameter | 1 | -6/+6
2011-07-02 | slub: Push irq disable into allocate_slab() | Christoph Lameter | 1 | -10/+13