path: root/drivers/mtd/ubi/wl.c
Age  Commit message  [Author] (files, -deleted/+added lines)
2023-10-29  ubi: fastmap: Get wl PEB even ec beyonds the 'max' if free PEBs are run out  [Zhihao Cheng] (1 file, -6/+10)
This is part 2 of the fix for cyclically reusing a single fastmap data PEB. Consider one situation: there are four free PEBs for fm_anchor, pool, wl_pool and the fastmap data PEB, with erase counters 100, 100, 100 and 5096 (ubi->beb_rsvd_pebs is 0). The PEB with erase counter 5096 is always picked for fastmap data according to the implementation of find_wl_entry(); since the fastmap data PEB is never scheduled for wear leveling, we end up with two PEBs (fm data) whose erase counters are far greater than those of the other PEBs. Get a wl PEB even if its erase counter exceeds the 'max' in find_wl_entry() when the free PEBs run out after filling the pools and fm data. Then the PEB with the biggest erase counter is taken as a wl PEB and can be scheduled for wear leveling. Fixes: dbb7d2a88d2a ("UBI: Add fastmap core") Link: https://bugzilla.kernel.org/show_bug.cgi?id=217787 Signed-off-by: Zhihao Cheng <chengzhihao1@huawei.com> Signed-off-by: Richard Weinberger <richard@nod.at>
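For orientation, the selection loop being patched looks roughly like the sketch below (recalled from wl.c, not the verbatim hunk); the fix relaxes the 'max' bound in the marked branch when no free PEB below it is left:

	/* Sketch of find_wl_entry(): pick a free PEB with a low erase counter,
	 * but stay within 'lowest ec + diff'. */
	static struct ubi_wl_entry *find_wl_entry(struct ubi_device *ubi,
						  struct rb_root *root, int diff)
	{
		struct rb_node *p;
		struct ubi_wl_entry *e;
		int max;

		e = rb_entry(rb_first(root), struct ubi_wl_entry, u.rb);
		max = e->ec + diff;

		p = root->rb_node;
		while (p) {
			struct ubi_wl_entry *e1;

			e1 = rb_entry(p, struct ubi_wl_entry, u.rb);
			if (e1->ec >= max) {
				/* The fix described above: when the pools and fm data
				 * already consumed every PEB below 'max', accept e1
				 * anyway instead of descending left, so the high-ec
				 * PEB becomes a normal wl PEB and is wear-leveled. */
				p = p->rb_left;
			} else {
				p = p->rb_right;
				e = e1;
			}
		}

		return e;
	}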
2023-10-28  ubi: fastmap: may_reserve_for_fm: Don't reserve PEB if fm_anchor exists  [Zhihao Cheng] (1 file, -3/+6)
This is part 1 of the fix for cyclically reusing a single fastmap data PEB. After running fsstress on UBIFS for a while, UBI (16384 blocks, fastmap takes 2 blocks) has an erase block (PEB 8031) with an erase counter far greater than that of any other PEB:

  from    ..  to      count     min      avg      max
  ----------------------------------------------------
       0 ..      9:       0       0        0        0
      10 ..     99:     532      84       92       99
     100 ..    999:   15787     100      147      229
    1000 ..   9999:      64    4699     4765     4826
   10000 ..  99999:       0       0        0        0
  100000 ..    inf:       1  272935   272935   272935
  ----------------------------------------------------
  Total           :   16384      84      180   272935

Unlike fm_anchor, there are no candidate PEBs for the fastmap data area, so old fastmap data PEBs are reused after all free PEBs have been filled into pool/wl_pool:

  ubi_update_fastmap
    for (i = 1; i < new_fm->used_blocks; i++)
      erase_block(ubi, old_fm->e[i]->pnum)
      new_fm->e[i] = old_fm->e[i]

According to the wear-leveling algorithm, UBI selects one PEB with a small erase counter from ubi->used and one PEB with a big erase counter from wl_pool; the reused fastmap data PEB is in neither of these trees. UBI won't schedule this PEB for wear leveling even though it is in ubi->used, because the wl algorithm expects a small erase counter for the used PEB. Don't reserve a PEB for fastmap in may_reserve_for_fm() if fm_anchor already exists. Otherwise, when UBI is running out of free PEBs, the only free PEB (pnum < 64) will be skipped and fastmap data will be written to the same old PEB. Fixes: dbb7d2a88d2a ("UBI: Add fastmap core") Link: https://bugzilla.kernel.org/show_bug.cgi?id=217787 Signed-off-by: Zhihao Cheng <chengzhihao1@huawei.com> Signed-off-by: Richard Weinberger <richard@nod.at>
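A sketch of the helper being changed, with the anchor check described above added; the exact condition in mainline may differ, so treat this as an illustration rather than the literal patch:

	/* Sketch: only steer the next free PEB towards the fastmap area when no
	 * anchor is reserved yet; otherwise hand the PEB back to the caller. */
	static struct ubi_wl_entry *may_reserve_for_fm(struct ubi_device *ubi,
						       struct ubi_wl_entry *e,
						       struct rb_root *root)
	{
		if (e && !ubi->fm_disabled && !ubi->fm && !ubi->fm_anchor &&
		    e->pnum < UBI_FM_MAX_START)
			e = rb_entry(rb_next(root->rb_node),
				     struct ubi_wl_entry, u.rb);

		return e;
	}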
2023-10-28  ubi: fastmap: Wait until there are enough free PEBs before filling pools  [Zhihao Cheng] (1 file, -4/+10)
Wait until there are enough free PEBs before filling pool/wl_pool; sometimes the erase_worker is not scheduled in time, which causes two situations: A. Only a few PEBs are filled into the pool, which makes ubi_update_fastmap be called frequently and leads to the first 64 PEBs being erased more often than the other PEBs. So waiting for free PEBs before filling the pool reduces the fastmap update frequency and prolongs flash service life. B. When space is nearly running out, ubi_refill_pools() cannot make sure pool and wl_pool are filled with free PEBs, caused by the delay of the erase_worker. After this patch is applied, there must exist free PEBs in the pool after one call of ubi_update_fastmap. Besides, this patch is a preparation for fixing the large erase counter in the fastmap data block and for fixing lapsed wear leveling for the first 64 PEBs. Link: https://bugzilla.kernel.org/show_bug.cgi?id=217787 Signed-off-by: Zhihao Cheng <chengzhihao1@huawei.com> Signed-off-by: Richard Weinberger <richard@nod.at>
2023-10-28  ubi: Replace erase_block() with sync_erase()  [Zhihao Cheng] (1 file, -5/+4)
Since erase_block() has the same logic as sync_erase(), just replace it with sync_erase(); also rename sync_erase() to ubi_sync_erase(). Signed-off-by: Zhihao Cheng <chengzhihao1@huawei.com> Signed-off-by: Richard Weinberger <richard@nod.at>
2023-03-04  ubi: Fix deadlock caused by recursively holding work_sem  [ZhaoLong Wang] (1 file, -2/+2)
During the processing of the bgt, if sync_erase() returns -EBUSY or some other error code in __erase_worker(), schedule_erase() is called again, which leads to down_read(ubi->work_sem) being held twice; the bgt may then get blocked by down_write(ubi->work_sem) in ubi_update_fastmap(), which causes a deadlock:

  ubi bgt                                other task
  do_work
    down_read(&ubi->work_sem)
                                         ubi_update_fastmap
    erase_worker                           down_write(&ubi->work_sem)
      __erase_worker                         # Blocked by down_read
        schedule_erase
          schedule_ubi_work
            down_read(&ubi->work_sem)

Fix this by changing the input parameter @nested of schedule_erase() to 'true' to avoid recursively acquiring down_read(&ubi->work_sem). Also fix the incorrect comment about the @nested parameter of schedule_erase(), because when down_write(ubi->work_sem) is held, @nested also needs to be true. Link: https://bugzilla.kernel.org/show_bug.cgi?id=217093 Fixes: 2e8f08deabbc ("ubi: Fix races around ubi_refill_pools()") Signed-off-by: ZhaoLong Wang <wangzhaolong1@huawei.com> Reviewed-by: Zhihao Cheng <chengzhihao1@huawei.com> Signed-off-by: Richard Weinberger <richard@nod.at>
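A sketch of the dispatch inside schedule_erase() that the @nested flag controls (shape recalled from wl.c; the field list and GFP flag are assumptions, not the literal patch):

	/* Sketch of schedule_erase(): queue an erase work item; the caller says
	 * whether it already holds ubi->work_sem (a worker running via do_work()
	 * does, so it must pass nested == true). */
	static int schedule_erase(struct ubi_device *ubi, struct ubi_wl_entry *e,
				  int vol_id, int lnum, int torture, bool nested)
	{
		struct ubi_work *wl_wrk;

		wl_wrk = kmalloc(sizeof(struct ubi_work), GFP_NOFS);
		if (!wl_wrk)
			return -ENOMEM;

		wl_wrk->func = &erase_worker;
		wl_wrk->e = e;
		wl_wrk->vol_id = vol_id;
		wl_wrk->lnum = lnum;
		wl_wrk->torture = torture;

		if (nested)
			__schedule_ubi_work(ubi, wl_wrk);  /* caller already holds work_sem */
		else
			schedule_ubi_work(ubi, wl_wrk);    /* takes down_read(&ubi->work_sem) */
		return 0;
	}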
2023-02-06  ubi: use correct names in function kernel-doc comments  [Randy Dunlap] (1 file, -1/+1)
Fix kernel-doc warnings by using the correct function names in their kernel-doc notation:

  drivers/mtd/ubi/eba.c:72: warning: expecting prototype for next_sqnum(). Prototype was for ubi_next_sqnum() instead
  drivers/mtd/ubi/wl.c:176: warning: expecting prototype for wl_tree_destroy(). Prototype was for wl_entry_destroy() instead
  drivers/mtd/ubi/misc.c:24: warning: expecting prototype for calc_data_len(). Prototype was for ubi_calc_data_len() instead

Signed-off-by: Randy Dunlap <rdunlap@infradead.org> Cc: Richard Weinberger <richard@nod.at> Cc: Miquel Raynal <miquel.raynal@bootlin.com> Cc: Vignesh Raghavendra <vigneshr@ti.com> Cc: linux-mtd@lists.infradead.org Signed-off-by: Richard Weinberger <richard@nod.at>
2023-02-02  ubi: ubi_wl_put_peb: Fix infinite loop when wear-leveling work failed  [Zhihao Cheng] (1 file, -2/+14)
The following process will trigger an infinite loop in ubi_wl_put_peb():

  ubifs_bgt                            ubi_bgt
  ubifs_leb_unmap
    ubi_leb_unmap
      ubi_eba_unmap_leb
        ubi_wl_put_peb                 wear_leveling_worker
                                         e1 = rb_entry(rb_first(&ubi->used)
                                         e2 = get_peb_for_wl(ubi)
                                         ubi_io_read_vid_hdr  // return err (flash fault)
                                         out_error:
                                           ubi->move_from = ubi->move_to = NULL
                                           wl_entry_destroy(ubi, e1)
                                             ubi->lookuptbl[e->pnum] = NULL
        retry:
          e = ubi->lookuptbl[pnum];  // return NULL
          if (e == ubi->move_from) { // NULL == NULL gets true
            goto retry;              // infinite loop !!!

  $ top
    PID USER  PR NI VIRT RES SHR S  %CPU %MEM COMMAND
   7676 root  20  0    0   0   0 R 100.0  0.0 ubifs_bgt0_0

Fix it by: 1) letting ubi_wl_put_peb() return directly if the wear-leveling entry has been removed from ubi->lookuptbl, and 2) using ubi->wl_lock to protect wl entry deletion, to prevent a use-after-free on the wl entry in ubi_wl_put_peb(). Fetch a reproducer in [Link]. Fixes: 43f9b25a9cdd7b1 ("UBI: bugfix: protect from volume removal") Fixes: ee59ba8b064f692 ("UBI: Fix stale pointers in ubi->lookuptbl") Link: https://bugzilla.kernel.org/show_bug.cgi?id=216111 Signed-off-by: Zhihao Cheng <chengzhihao1@huawei.com> Signed-off-by: Richard Weinberger <richard@nod.at>
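Conceptually the fix adds a NULL check under ubi->wl_lock in front of the existing move_from comparison; a sketch of the retry path in ubi_wl_put_peb() after the fix (assumed shape, details such as waiting on the move mutex are omitted):

	retry:
		spin_lock(&ubi->wl_lock);
		e = ubi->lookuptbl[pnum];
		if (!e) {
			/*
			 * The wl entry was already destroyed by a failed WL move
			 * (wl_entry_destroy() now clears lookuptbl under wl_lock),
			 * so there is nothing left to put back; return instead of
			 * spinning on "e == ubi->move_from" with e == NULL.
			 */
			spin_unlock(&ubi->wl_lock);
			up_read(&ubi->fm_eba_sem);
			return 0;
		}
		if (e == ubi->move_from) {
			/* PEB is being moved; wait for the WL worker and retry */
			spin_unlock(&ubi->wl_lock);
			goto retry;
		}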
2023-02-02  ubi: Fix UAF wear-leveling entry in eraseblk_count_seq_show()  [Zhihao Cheng] (1 file, -1/+8)
A wear-leveling entry can be freed in an error path and may then be accessed again in eraseblk_count_seq_show(), for example:

  __erase_worker                            eraseblk_count_seq_show
                                              wl = ubi->lookuptbl[*block_number]
                                              if (wl)
    wl_entry_destroy
      ubi->lookuptbl[e->pnum] = NULL
      kmem_cache_free(ubi_wl_entry_slab, e)
                                                erase_count = wl->ec  // UAF!

Wear-leveling entry updating/accessing in ubi->lookuptbl should be protected by ubi->wl_lock; fix it by taking ubi->wl_lock to serialize wl entry accesses between wl_entry_destroy() and eraseblk_count_seq_show(). Fetch a reproducer in [Link]. Link: https://bugzilla.kernel.org/show_bug.cgi?id=216305 Fixes: 7bccd12d27b7e3 ("ubi: Add debugfs file for tracking PEB state") Fixes: 801c135ce73d5d ("UBI: Unsorted Block Images") Signed-off-by: Zhihao Cheng <chengzhihao1@huawei.com> Signed-off-by: Richard Weinberger <richard@nod.at>
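On the debugfs side the fix amounts to doing the lookup and the ->ec read under the same lock that wl_entry_destroy() now holds; a sketch of the relevant lines in eraseblk_count_seq_show() (assumed shape, not the verbatim hunk):

	/* Read the entry under ubi->wl_lock so it cannot be freed underneath us
	 * by kmem_cache_free() in wl_entry_destroy(). */
	spin_lock(&ubi->wl_lock);
	wl = ubi->lookuptbl[*block_number];
	if (wl)
		erase_count = wl->ec;
	spin_unlock(&ubi->wl_lock);

	if (!wl)
		return 0;	/* PEB has no wl entry (e.g. already destroyed) */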
2022-09-21  ubi: fastmap: Fix typo in comments  [Zhang Jiaming] (1 file, -1/+1)
There is a typo (dont't) in the comments. Fix it. Signed-off-by: Zhang Jiaming <jiaming@nfschina.com> Reviewed-by: Zhihao Cheng <chengzhihao1@huawei.com> Signed-off-by: Richard Weinberger <richard@nod.at>
2022-09-21  ubi: Fix repeated words in comments  [Jilin Yuan] (1 file, -3/+3)
Delete the redundant word 'a'. Delete the redundant word 'the'. Signed-off-by: Jilin Yuan <yuanjilin@cdjrlc.com> Signed-off-by: Richard Weinberger <richard@nod.at>
2022-05-27  ubi: fastmap: Check wl_pool for free peb before wear leveling  [Zhihao Cheng] (1 file, -2/+12)
UBI fetches a free PEB from the wl_pool during wear leveling, so UBI should check whether the wl_pool is empty before wear leveling. Otherwise, UBI will miss wear-leveling chances when the free PEBs run out. Signed-off-by: Zhihao Cheng <chengzhihao1@huawei.com> Signed-off-by: Richard Weinberger <richard@nod.at>
2022-05-27  ubi: fastmap: Fix high cpu usage of ubi_bgt by making sure wl_pool not empty  [Zhihao Cheng] (1 file, -9/+10)
There are at least 6 PEBs reserved on a UBI device:
  1. EBA_RESERVED_PEBS[1]
  2. WL_RESERVED_PEBS[1]
  3. UBI_LAYOUT_VOLUME_EBS[2]
  4. MIN_FASTMAP_RESERVED_PEBS[2]

When all ubi volumes take all their PEBs, there are 3 (EBA_RESERVED_PEBS + WL_RESERVED_PEBS + MIN_FASTMAP_RESERVED_PEBS - MIN_FASTMAP_TAKEN_PEBS[1]) free PEBs. Since commit f9c34bb529975fe ("ubi: Fix producing anchor PEBs") and commit 4b68bf9a69d22dd ("ubi: Select fastmap anchor PEBs considering wear level rules") were applied, there is only 1 (3 - FASTMAP_ANCHOR_PEBS[1] - FASTMAP_NEXT_ANCHOR_PEBS[1]) free PEB to fill pool and wl_pool; after filling the pool, wl_pool is always empty. So UBI could be stuck in an infinite loop:

  ubi_thread                               system_wq
  wear_leveling_worker <--------------------------------------------------+
    get_peb_for_wl                                                         |
      // fm_wl_pool, used = size = 0                                       |
      schedule_work(&ubi->fm_work)                                         |
                                           update_fastmap_work_fn          |
                                             ubi_update_fastmap            |
                                               ubi_refill_pools            |
                                               // ubi->free_count - ubi->beb_rsvd_pebs < 5
                                               // wl_pool is not filled with any PEBs
                                               schedule_erase(old_fm_anchor)
                                               ubi_ensure_anchor_pebs      |
                                                 __schedule_ubi_work(wear_leveling_worker)
                                                                           |
                                           __erase_worker                  |
                                             ensure_wear_leveling          |
                                               __schedule_ubi_work(wear_leveling_worker)
                                                                           |
                                           --------------------------------+

which causes high cpu usage of ubi_bgt:

  top - 12:10:42 up 5 min, 2 users, load average: 1.76, 0.68, 0.27
  Tasks: 123 total, 3 running, 54 sleeping, 0 stopped, 0 zombie
    PID USER  PR NI VIRT RES SHR S %CPU %MEM   TIME+ COMMAND
   1589 root  20  0    0   0   0 R 45.0  0.0 0:38.86 ubi_bgt0d
    319 root  20  0    0   0   0 I 15.2  0.0 0:15.29 kworker/0:3-eve
    371 root  20  0    0   0   0 I 14.9  0.0 0:12.85 kworker/3:3-eve
     20 root  20  0    0   0   0 I 11.3  0.0 0:05.33 kworker/1:0-eve
    202 root  20  0    0   0   0 I 11.3  0.0 0:04.93 kworker/2:3-eve

In commit 4b68bf9a69d22dd ("ubi: Select fastmap anchor PEBs considering wear level rules"), there are three key changes:
  1) Choose the fastmap anchor when the most free PEBs are available.
  2) Enable anchor move within the anchor area again as it is useful for distributing wear.
  3) Import a candidate fm anchor and check this PEB's erase count during wear leveling. If the wear-leveling limit is exceeded, use the used anchor-area PEB with the lowest erase count to replace it.

The anchor candidate can be removed; we can check the fm_anchor PEB's erase count during wear leveling. Fix it by:
  1) Removing 'fm_next_anchor' and checking 'fm_anchor' during wear leveling.
  2) Preferentially filling one free PEB into fm_wl_pool on the condition that ubi->free_count > ubi->beb_rsvd_pebs, then trying to reserve enough free count for fastmap non-anchor PEBs after the above prerequisites are met.

Then there is at least 1 PEB in pool and 1 PEB in wl_pool after calling ubi_refill_pools() with all erase works done. Fetch a reproducer in [Link]. Fixes: 4b68bf9a69d22dd ("ubi: Select fastmap anchor PEBs ... rules") Link: https://bugzilla.kernel.org/show_bug.cgi?id=215407 Signed-off-by: Zhihao Cheng <chengzhihao1@huawei.com> Signed-off-by: Richard Weinberger <richard@nod.at>
2020-11-20  mtd: ubi: wl: Fix a couple of kernel-doc issues  [Lee Jones] (1 file, -2/+1)
Fixes the following W=1 kernel build warning(s):

  drivers/mtd/ubi/wl.c:584: warning: Function parameter or member 'nested' not described in 'schedule_erase'
  drivers/mtd/ubi/wl.c:1075: warning: Excess function parameter 'shutdown' description in '__erase_worker'

Cc: Richard Weinberger <richard@nod.at> Cc: Miquel Raynal <miquel.raynal@bootlin.com> Cc: Vignesh Raghavendra <vigneshr@ti.com> Cc: linux-mtd@lists.infradead.org Signed-off-by: Lee Jones <lee.jones@linaro.org> Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Link: https://lore.kernel.org/linux-mtd/20201109182206.3037326-13-lee.jones@linaro.org
2020-09-17  ubi: check kthread_should_stop() after the setting of task state  [Zhihao Cheng] (1 file, -0/+13)
A detach hang is possible when a race occurs between the detach process and the ubi background thread. The following sequence outlines the race:

  ubi thread:  if (list_empty(&ubi->works)...
  ubi detach:  set_bit(KTHREAD_SHOULD_STOP, &kthread->flags)  => by kthread_stop()
               wake_up_process()  => ubi thread is still running, so 0 is returned
  ubi thread:  set_current_state(TASK_INTERRUPTIBLE)
               schedule()  => ubi thread will never be scheduled again
  ubi detach:  wait_for_completion()  => hung task!

To fix it, we need to check kthread_should_stop() after we set the task state, so the ubi thread will either see the stop bit and exit, or the task state is reset to runnable so that it isn't scheduled out indefinitely. Signed-off-by: Zhihao Cheng <chengzhihao1@huawei.com> Cc: <stable@vger.kernel.org> Fixes: 801c135ce73d5df1ca ("UBI: Unsorted Block Images") Reported-by: syzbot+853639d0cb16c31c7a14@syzkaller.appspotmail.com Signed-off-by: Richard Weinberger <richard@nod.at>
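This is the classic kthread idle-loop pattern; a sketch of the fixed wait path in ubi_thread() (assumed shape, failure accounting omitted):

	for (;;) {
		if (kthread_should_stop())
			break;

		spin_lock(&ubi->wl_lock);
		if (list_empty(&ubi->works) || ubi->ro_mode ||
		    !ubi->thread_enabled || ubi_dbg_is_bgt_disabled(ubi)) {
			set_current_state(TASK_INTERRUPTIBLE);
			spin_unlock(&ubi->wl_lock);

			/*
			 * Check the stop flag again *after* changing the task
			 * state: kthread_stop() may have set it right after the
			 * list_empty() test, and its wake_up_process() was a
			 * no-op because the thread was still TASK_RUNNING.
			 */
			if (kthread_should_stop()) {
				set_current_state(TASK_RUNNING);
				break;
			}

			schedule();
			continue;
		}
		spin_unlock(&ubi->wl_lock);

		do_work(ubi);	/* process one queued UBI work item */
	}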
2020-08-03  ubi: fastmap: Don't produce the initial next anchor PEB when fastmap is disabled  [Zhihao Cheng] (1 file, -1/+2)
The following process triggers a memleak caused by forgetting to release the initial next anchor PEB (CONFIG_MTD_UBI_FASTMAP is disabled):
  1. attach -> __erase_worker -> produce the initial next anchor PEB
  2. detach -> ubi_fastmap_close (does nothing; it should have released the initial next anchor PEB)
Don't produce the initial next anchor PEB in __erase_worker() when fastmap is disabled. Signed-off-by: Zhihao Cheng <chengzhihao1@huawei.com> Suggested-by: Sascha Hauer <s.hauer@pengutronix.de> Fixes: f9c34bb529975fe ("ubi: Fix producing anchor PEBs") Reported-by: syzbot+d9aab50b1154e3d163f5@syzkaller.appspotmail.com Signed-off-by: Richard Weinberger <richard@nod.at>
2020-06-02  ubi: Select fastmap anchor PEBs considering wear level rules  [Arne Edholm] (1 file, -9/+19)
There is a risk that the fastmap anchor PEB alternates between just two PEBs: the current anchor and the previous anchor that was just deleted. As the fastmap pools get the first take on free PEBs, the pools may leave no free PEBs to be selected as the new anchor, resulting in the two PEBs alternating. If the anchor PEBs get a high erase count, the PEBs will not be used by the pools but remain in ubi->free, further increasing the likelihood that they will be used as anchors. Getting stuck using only a couple of PEBs continuously results in uneven wear, eventually leading to failure. To fix this:
  - Choose the fastmap anchor when the most free PEBs are available. This is during rebuilding of the fastmap pools, after the unused pool PEBs are added to ubi->free but before the pools are populated again from the free PEBs. Also reserve an additional second-best PEB as a candidate for the next time the fastmap anchor is updated. If a better PEB is found the next time the fastmap anchor is updated, the candidate is made available for building the pools.
  - Enable anchor move within the anchor area again as it is useful for distributing wear.
  - The anchor candidate for the next fastmap update is the most suited free PEB. Check this PEB's erase count during wear leveling. If the wear-leveling limit is exceeded, the PEB is considered unsuitable for now. As all other unused anchor-area PEBs should be even worse, free up the used anchor-area PEB with the lowest erase count.
Signed-off-by: Arne Edholm <arne.edholm@axis.com> Signed-off-by: Richard Weinberger <richard@nod.at>
2020-03-31  ubi: fastmap: Only produce the initial anchor PEB when fastmap is used  [Hou Tao] (1 file, -1/+2)
Don't produce the initial anchor PEB when the ubi device is read-only or fastmap is disabled, else the resulting PEB will be unusable by any volume. Signed-off-by: Hou Tao <houtao1@huawei.com> Signed-off-by: Richard Weinberger <richard@nod.at>
2020-01-17  ubi: wl: Remove set but not used variable 'prev_e'  [YueHaibing] (1 file, -2/+1)
Fixes gcc '-Wunused-but-set-variable' warning:

  drivers/mtd/ubi/wl.c: In function 'find_wl_entry':
  drivers/mtd/ubi/wl.c:322:27: warning: variable 'prev_e' set but not used [-Wunused-but-set-variable]

It's not used any more now, so remove it. Fixes: f9c34bb52997 ("ubi: Fix producing anchor PEBs") Signed-off-by: YueHaibing <yuehaibing@huawei.com> Signed-off-by: Richard Weinberger <richard@nod.at>
2019-11-18  ubi: Fix producing anchor PEBs  [Sascha Hauer] (1 file, -18/+14)
When a new fastmap is about to be written UBI must make sure it has a free block for a fastmap anchor available. For this ubi_update_fastmap() calls ubi_ensure_anchor_pebs(). This stopped working with 2e8f08deabbc ("ubi: Fix races around ubi_refill_pools()"), with this commit the wear leveling code is blocked and can no longer produce free PEBs. UBI then more often than not falls back to write the new fastmap anchor to the same block it was already on which means the same erase block gets erased during each fastmap write and wears out quite fast. As the locking prevents us from producing the anchor PEB when we actually need it, this patch changes the strategy for creating the anchor PEB. We no longer create it on demand right before we want to write a fastmap, but instead we create an anchor PEB right after we have written a fastmap. This gives us enough time to produce a new anchor PEB before it is needed. To make sure we have an anchor PEB for the very first fastmap write we call ubi_ensure_anchor_pebs() during initialisation as well. Fixes: 2e8f08deabbc ("ubi: Fix races around ubi_refill_pools()") Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de> Signed-off-by: Richard Weinberger <richard@nod.at>
2019-09-15  ubi: Don't do anchor move within fastmap area  [Richard Weinberger] (1 file, -0/+6)
To make sure that Fastmap can use a PEB within the first 64 PEBs, UBI moves blocks away from that area. It uses regular wear-leveling for that job. An anchor move can be triggered if no PEB is free in this area or because of anticipation. In the latter case it can happen that UBI decides to move a block but then finds a free PEB within the same area. This case is in vain and only increases erase counters. Catch this case and cancel wear-leveling if it happens. Signed-off-by: Richard Weinberger <richard@nod.at>
2019-05-30  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 156  [Thomas Gleixner] (1 file, -14/+1)
Based on 1 normalized pattern(s): this program is free software you can redistribute it and or modify it under the terms of the gnu general public license as published by the free software foundation either version 2 of the license or at your option any later version this program is distributed in the hope that it will be useful but without any warranty without even the implied warranty of merchantability or fitness for a particular purpose see the gnu general public license for more details you should have received a copy of the gnu general public license along with this program if not write to the free software foundation inc 59 temple place suite 330 boston ma 02111 1307 usa extracted by the scancode license scanner the SPDX license identifier GPL-2.0-or-later has been chosen to replace the boilerplate/reference in 1334 file(s). Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Allison Randal <allison@lohutok.net> Reviewed-by: Richard Fontana <rfontana@redhat.com> Cc: linux-spdx@vger.kernel.org Link: https://lkml.kernel.org/r/20190527070033.113240726@linutronix.de Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-05-07  ubi: wl: Fix uninitialized variable  [Gustavo A. R. Silva] (1 file, -1/+1)
There is a potential execution path in which variable *err* is compared against UBI_IO_BITFLIPS without being properly initialized previously. Fix this by initializing variable *err* to 0. Addresses-Coverity-ID: 1477298 ("Uninitialized scalar variable") Fixes: 663586c0a892 ("ubi: Expose the bitrot interface") Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com> Signed-off-by: Richard Weinberger <richard@nod.at>
2019-03-05  ubi: wl: Silence uninitialized variable warning  [Dan Carpenter] (1 file, -1/+1)
This condition needs to be flipped around because "err" is uninitialized when "force" is set. The Smatch static analysis tool complains and UBSan will also complain at runtime. Fixes: 663586c0a892 ("ubi: Expose the bitrot interface") Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Reviewed-by: Nathan Chancellor <natechancellor@gmail.com> Tested-by: Nathan Chancellor <natechancellor@gmail.com> Signed-off-by: Richard Weinberger <richard@nod.at>
2019-02-24  ubi: Expose the bitrot interface  [Richard Weinberger] (1 file, -0/+144)
Using UBI_IOCRPEB and UBI_IOCSPEB userspace can force reading and scrubbing of PEBs. In case of bitflips UBI will automatically take action and move data to a different PEB. This interface allows a daemon to foster your NAND. Signed-off-by: Richard Weinberger <richard@nod.at>
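As a usage illustration, a scrubbing daemon could issue the new ioctls against the UBI character device; the sketch below assumes the ioctls take a pointer to the PEB number, and the device path and PEB number are placeholders:

	#include <stdio.h>
	#include <fcntl.h>
	#include <unistd.h>
	#include <sys/ioctl.h>
	#include <linux/types.h>
	#include <mtd/ubi-user.h>	/* UBI_IOCRPEB, UBI_IOCSPEB */

	int main(void)
	{
		int fd = open("/dev/ubi0", O_RDONLY);	/* placeholder device */
		__s32 pnum = 0;				/* placeholder PEB number */

		if (fd < 0) {
			perror("open");
			return 1;
		}

		/* Force a read-back of this PEB; UBI scrubs it if bitflips show up. */
		if (ioctl(fd, UBI_IOCRPEB, &pnum) < 0)
			perror("UBI_IOCRPEB");

		close(fd);
		return 0;
	}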
2019-02-24  ubi: Introduce in_pq()  [Richard Weinberger] (1 file, -7/+23)
This function works like in_wl_tree() but checks whether an ubi_wl_entry is currently in the protection queue. We need this function to query the current state of an ubi_wl_entry. Signed-off-by: Richard Weinberger <richard@nod.at>
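The helper just walks the protection-queue buckets; roughly (recalled from wl.c, treat as a sketch):

	/* Sketch of in_pq(): return whether @e sits in one of the protection
	 * queue lists (ubi->pq[] is an array of UBI_PROT_QUEUE_LEN list heads). */
	static bool in_pq(const struct ubi_device *ubi, struct ubi_wl_entry *e)
	{
		struct ubi_wl_entry *p;
		int i;

		for (i = 0; i < UBI_PROT_QUEUE_LEN; ++i)
			list_for_each_entry(p, &ubi->pq[i], u.list)
				if (p == e)
					return true;

		return false;
	}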
2018-06-13  treewide: kzalloc() -> kcalloc()  [Kees Cook] (1 file, -1/+1)
The kzalloc() function has a 2-factor argument form, kcalloc(). This patch replaces cases of: kzalloc(a * b, gfp) with: kcalloc(a * b, gfp) as well as handling cases of: kzalloc(a * b * c, gfp) with: kzalloc(array3_size(a, b, c), gfp) as it's slightly less ugly than: kzalloc_array(array_size(a, b), c, gfp) This does, however, attempt to ignore constant size factors like: kzalloc(4 * 1024, gfp) though any constants defined via macros get caught up in the conversion. Any factors with a sizeof() of "unsigned char", "char", and "u8" were dropped, since they're redundant. The Coccinelle script used for this was: // Fix redundant parens around sizeof(). @@ type TYPE; expression THING, E; @@ ( kzalloc( - (sizeof(TYPE)) * E + sizeof(TYPE) * E , ...) | kzalloc( - (sizeof(THING)) * E + sizeof(THING) * E , ...) ) // Drop single-byte sizes and redundant parens. @@ expression COUNT; typedef u8; typedef __u8; @@ ( kzalloc( - sizeof(u8) * (COUNT) + COUNT , ...) | kzalloc( - sizeof(__u8) * (COUNT) + COUNT , ...) | kzalloc( - sizeof(char) * (COUNT) + COUNT , ...) | kzalloc( - sizeof(unsigned char) * (COUNT) + COUNT , ...) | kzalloc( - sizeof(u8) * COUNT + COUNT , ...) | kzalloc( - sizeof(__u8) * COUNT + COUNT , ...) | kzalloc( - sizeof(char) * COUNT + COUNT , ...) | kzalloc( - sizeof(unsigned char) * COUNT + COUNT , ...) ) // 2-factor product with sizeof(type/expression) and identifier or constant. @@ type TYPE; expression THING; identifier COUNT_ID; constant COUNT_CONST; @@ ( - kzalloc + kcalloc ( - sizeof(TYPE) * (COUNT_ID) + COUNT_ID, sizeof(TYPE) , ...) | - kzalloc + kcalloc ( - sizeof(TYPE) * COUNT_ID + COUNT_ID, sizeof(TYPE) , ...) | - kzalloc + kcalloc ( - sizeof(TYPE) * (COUNT_CONST) + COUNT_CONST, sizeof(TYPE) , ...) | - kzalloc + kcalloc ( - sizeof(TYPE) * COUNT_CONST + COUNT_CONST, sizeof(TYPE) , ...) | - kzalloc + kcalloc ( - sizeof(THING) * (COUNT_ID) + COUNT_ID, sizeof(THING) , ...) | - kzalloc + kcalloc ( - sizeof(THING) * COUNT_ID + COUNT_ID, sizeof(THING) , ...) | - kzalloc + kcalloc ( - sizeof(THING) * (COUNT_CONST) + COUNT_CONST, sizeof(THING) , ...) | - kzalloc + kcalloc ( - sizeof(THING) * COUNT_CONST + COUNT_CONST, sizeof(THING) , ...) ) // 2-factor product, only identifiers. @@ identifier SIZE, COUNT; @@ - kzalloc + kcalloc ( - SIZE * COUNT + COUNT, SIZE , ...) // 3-factor product with 1 sizeof(type) or sizeof(expression), with // redundant parens removed. @@ expression THING; identifier STRIDE, COUNT; type TYPE; @@ ( kzalloc( - sizeof(TYPE) * (COUNT) * (STRIDE) + array3_size(COUNT, STRIDE, sizeof(TYPE)) , ...) | kzalloc( - sizeof(TYPE) * (COUNT) * STRIDE + array3_size(COUNT, STRIDE, sizeof(TYPE)) , ...) | kzalloc( - sizeof(TYPE) * COUNT * (STRIDE) + array3_size(COUNT, STRIDE, sizeof(TYPE)) , ...) | kzalloc( - sizeof(TYPE) * COUNT * STRIDE + array3_size(COUNT, STRIDE, sizeof(TYPE)) , ...) | kzalloc( - sizeof(THING) * (COUNT) * (STRIDE) + array3_size(COUNT, STRIDE, sizeof(THING)) , ...) | kzalloc( - sizeof(THING) * (COUNT) * STRIDE + array3_size(COUNT, STRIDE, sizeof(THING)) , ...) | kzalloc( - sizeof(THING) * COUNT * (STRIDE) + array3_size(COUNT, STRIDE, sizeof(THING)) , ...) | kzalloc( - sizeof(THING) * COUNT * STRIDE + array3_size(COUNT, STRIDE, sizeof(THING)) , ...) ) // 3-factor product with 2 sizeof(variable), with redundant parens removed. @@ expression THING1, THING2; identifier COUNT; type TYPE1, TYPE2; @@ ( kzalloc( - sizeof(TYPE1) * sizeof(TYPE2) * COUNT + array3_size(COUNT, sizeof(TYPE1), sizeof(TYPE2)) , ...) 
| kzalloc( - sizeof(TYPE1) * sizeof(THING2) * (COUNT) + array3_size(COUNT, sizeof(TYPE1), sizeof(TYPE2)) , ...) | kzalloc( - sizeof(THING1) * sizeof(THING2) * COUNT + array3_size(COUNT, sizeof(THING1), sizeof(THING2)) , ...) | kzalloc( - sizeof(THING1) * sizeof(THING2) * (COUNT) + array3_size(COUNT, sizeof(THING1), sizeof(THING2)) , ...) | kzalloc( - sizeof(TYPE1) * sizeof(THING2) * COUNT + array3_size(COUNT, sizeof(TYPE1), sizeof(THING2)) , ...) | kzalloc( - sizeof(TYPE1) * sizeof(THING2) * (COUNT) + array3_size(COUNT, sizeof(TYPE1), sizeof(THING2)) , ...) ) // 3-factor product, only identifiers, with redundant parens removed. @@ identifier STRIDE, SIZE, COUNT; @@ ( kzalloc( - (COUNT) * STRIDE * SIZE + array3_size(COUNT, STRIDE, SIZE) , ...) | kzalloc( - COUNT * (STRIDE) * SIZE + array3_size(COUNT, STRIDE, SIZE) , ...) | kzalloc( - COUNT * STRIDE * (SIZE) + array3_size(COUNT, STRIDE, SIZE) , ...) | kzalloc( - (COUNT) * (STRIDE) * SIZE + array3_size(COUNT, STRIDE, SIZE) , ...) | kzalloc( - COUNT * (STRIDE) * (SIZE) + array3_size(COUNT, STRIDE, SIZE) , ...) | kzalloc( - (COUNT) * STRIDE * (SIZE) + array3_size(COUNT, STRIDE, SIZE) , ...) | kzalloc( - (COUNT) * (STRIDE) * (SIZE) + array3_size(COUNT, STRIDE, SIZE) , ...) | kzalloc( - COUNT * STRIDE * SIZE + array3_size(COUNT, STRIDE, SIZE) , ...) ) // Any remaining multi-factor products, first at least 3-factor products, // when they're not all constants... @@ expression E1, E2, E3; constant C1, C2, C3; @@ ( kzalloc(C1 * C2 * C3, ...) | kzalloc( - (E1) * E2 * E3 + array3_size(E1, E2, E3) , ...) | kzalloc( - (E1) * (E2) * E3 + array3_size(E1, E2, E3) , ...) | kzalloc( - (E1) * (E2) * (E3) + array3_size(E1, E2, E3) , ...) | kzalloc( - E1 * E2 * E3 + array3_size(E1, E2, E3) , ...) ) // And then all remaining 2 factors products when they're not all constants, // keeping sizeof() as the second factor argument. @@ expression THING, E1, E2; type TYPE; constant C1, C2, C3; @@ ( kzalloc(sizeof(THING) * C2, ...) | kzalloc(sizeof(TYPE) * C2, ...) | kzalloc(C1 * C2 * C3, ...) | kzalloc(C1 * C2, ...) | - kzalloc + kcalloc ( - sizeof(TYPE) * (E2) + E2, sizeof(TYPE) , ...) | - kzalloc + kcalloc ( - sizeof(TYPE) * E2 + E2, sizeof(TYPE) , ...) | - kzalloc + kcalloc ( - sizeof(THING) * (E2) + E2, sizeof(THING) , ...) | - kzalloc + kcalloc ( - sizeof(THING) * E2 + E2, sizeof(THING) , ...) | - kzalloc + kcalloc ( - (E1) * E2 + E1, E2 , ...) | - kzalloc + kcalloc ( - (E1) * (E2) + E1, E2 , ...) | - kzalloc + kcalloc ( - E1 * E2 + E1, E2 , ...) ) Signed-off-by: Kees Cook <keescook@chromium.org>
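In wl.c the single affected allocation is the PEB lookup table; the conversion pattern looks like the lines below (shown from memory as an example, not necessarily the exact hunk):

	/* before: the open-coded multiplication can overflow for a huge peb_count */
	ubi->lookuptbl = kzalloc(ubi->peb_count * sizeof(void *), GFP_KERNEL);

	/* after: kcalloc() takes count and element size separately and returns
	 * NULL if the multiplication would overflow */
	ubi->lookuptbl = kcalloc(ubi->peb_count, sizeof(void *), GFP_KERNEL);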
2018-06-07  ubi: fastmap: Cancel work upon detach  [Richard Weinberger] (1 file, -3/+1)
Ben Hutchings pointed out that 29b7a6fa1ec0 ("ubi: fastmap: Don't flush fastmap work on detach") does not really fix the problem; it just reduces the risk of hitting the race window where fastmap work races against free()'ing ubi->volumes[]. The correct approach is making sure that no more fastmap work is in progress before we free ubi data structures. So we cancel fastmap work right after the ubi background thread is stopped. By setting ubi->thread_enabled to zero we make sure that no further work tries to wake the thread. Fixes: 29b7a6fa1ec0 ("ubi: fastmap: Don't flush fastmap work on detach") Fixes: 74cdaf24004a ("UBI: Fastmap: Fix memory leaks while closing the WL sub-system") Cc: stable@vger.kernel.org Cc: Ben Hutchings <ben.hutchings@codethink.co.uk> Cc: Martin Townsend <mtownsend1973@gmail.com> Signed-off-by: Richard Weinberger <richard@nod.at>
2018-01-18  mtd: ubi: wl: Fix error return code in ubi_wl_init()  [Wei Yongjun] (1 file, -2/+6)
Fix to return error code -ENOMEM from the kmem_cache_alloc() error handling case instead of 0, as done elsewhere in this function. Fixes: f78e5623f45b ("ubi: fastmap: Erase outdated anchor PEBs during attach") Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com> Reviewed-by: Boris Brezillon <boris.brezillon@free-electrons.com> Signed-off-by: Richard Weinberger <richard@nod.at>
2018-01-18  ubi: Fastmap: Fix typo  [Sascha Hauer] (1 file, -1/+1)
Fix misspelling of 'available' in function name. Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de> Signed-off-by: Richard Weinberger <richard@nod.at>
2018-01-17  ubi: fastmap: Erase outdated anchor PEBs during attach  [Sascha Hauer] (1 file, -20/+57)
The fastmap update code might erase the current fastmap anchor PEB in case it doesn't find any new free PEB. When a power cut happens in this situation we must not have any outdated fastmap anchor PEB on the device, because that would be used to attach during next boot. The easiest way to make that sure is to erase all outdated fastmap anchor PEBs synchronously during attach. Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de> Reviewed-by: Richard Weinberger <richard@nod.at> Fixes: dbb7d2a88d2a ("UBI: Add fastmap core") Cc: <stable@vger.kernel.org> Signed-off-by: Richard Weinberger <richard@nod.at>
2016-10-02  ubi: Fix races around ubi_refill_pools()  [Richard Weinberger] (1 file, -6/+14)
When writing a new Fastmap, the first thing that happens is refilling the pools in memory. At this stage it is possible that new PEBs from the new pools already get claimed and written with data. If this happens before the new Fastmap data structure hits the flash and we face a power cut, the freshly written PEB will not be scanned and will go unnoticed. Solve the issue by locking the pools until Fastmap is written. Cc: <stable@vger.kernel.org> Fixes: dbb7d2a88d ("UBI: Add fastmap core") Signed-off-by: Richard Weinberger <richard@nod.at>
2016-10-02  ubi: Deal with interrupted erasures in WL  [Richard Weinberger] (1 file, -2/+19)
When Fastmap is used we can face an -EBADMSG here, since Fastmap cannot know about unmaps. If the erasure was interrupted, the PEB may show ECC errors and UBI would go to ro-mode, as it assumes that the PEB was checked during attach time, which is not the case with Fastmap. Cc: <stable@vger.kernel.org> Fixes: dbb7d2a88d ("UBI: Add fastmap core") Signed-off-by: Richard Weinberger <richard@nod.at>
2016-10-02  UBI: introduce the VID buffer concept  [Boris Brezillon] (1 file, -8/+11)
Currently, all VID headers are allocated and freed using the ubi_zalloc_vid_hdr() and ubi_free_vid_hdr() functions. These functions make sure to align the allocation on ubi->vid_hdr_alsize and adjust the vid_hdr pointer to match the ubi->vid_hdr_shift requirements. This works fine, but is a bit convoluted. Moreover, the future introduction of LEB consolidation (needed to support MLC/TLC NANDs) will allow a VID buffer to contain more than one VID header. Hence the creation of a ubi_vid_io_buf struct to attach extra information to the VID header. We currently only store the actual pointer of the underlying buffer, but will soon add the number of VID headers contained in the buffer. Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com> Signed-off-by: Richard Weinberger <richard@nod.at>
2016-07-30  ubi: Rework Fastmap attach base code  [Richard Weinberger] (1 file, -8/+33)
Introduce a new list to the UBI attach information object to be able to deal better with old and corrupted Fastmap eraseblocks. Also move more Fastmap specific code into fastmap.c. Signed-off-by: Richard Weinberger <richard@nod.at>
2016-05-24  UBI: Set free_count to zero before walking through erase list  [Heiko Schocher] (1 file, -1/+1)
Set free_count to zero before walking through the ai->erase list in wl_init(). This was found in U-Boot: as U-Boot has no workqueue/threads, it immediately calls erase_worker(), which increases free_count for each erased block. Without this patch, free_count then gets initialized to zero in wl_init() afterwards, so the free_count variable always ends up with the possibly wrong value 0 in U-Boot. Signed-off-by: Heiko Schocher <hs@denx.de> Reviewed-by: Boris Brezillon <boris.brezillon@free-electrons.com> Signed-off-by: Richard Weinberger <richard@nod.at>
2016-01-10  mtd: ubi: wl: avoid erasing a PEB which is empty  [Sebastian Siewior] (1 file, -3/+18)
wear_leveling_worker() currently unconditionally puts a PEB up for erasure in the error case, even if it has just been taken from the free_list and never used. In case the PEB was never used it can be put back on the free list, saving a precious erase cycle.
v1…v2:
  - to_leb_clean -> dst_leb_clean
  - use the nested option for ensure_wear_leveling()
  - do_sync_erase() can't go -ENOMEM so we can just go into RO-mode now.
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Richard Weinberger <richard@nod.at>
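Using the dst_leb_clean flag mentioned in the changelog above, the error path roughly becomes the following (a sketch, not the literal patch):

	/* Error path of wear_leveling_worker(), sketched: */
	if (dst_leb_clean) {
		/* The target PEB never got any data written to it, so hand it
		 * back to the free tree instead of burning an erase cycle. */
		spin_lock(&ubi->wl_lock);
		wl_tree_add(e2, &ubi->free);
		ubi->free_count++;
		spin_unlock(&ubi->wl_lock);
	} else {
		/* The target PEB may contain a half-written copy: erase it. */
		err = do_sync_erase(ubi, e2, vol_id, lnum, torture);
		if (err)
			goto out_ro;
	}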
2015-12-17  mtd: ubi: don't leak e if schedule_erase() fails  [Sebastian Siewior] (1 file, -0/+1)
If __erase_worker() fails to erase the EB and schedule_erase() fails as well to do anything about it then we go RO. But that is not a reason to leak the e argument here. Therefore clean up e. Cc: <stable@vger.kernel.org> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Richard Weinberger <richard@nod.at>
2015-12-17  mtd: ubi: fixup error correction in do_sync_erase()  [Sebastian Siewior] (1 file, -24/+28)
Since fastmap was introduced we have do_sync_erase(). This function can return an error and its error handling isn't obvious. First, the memory allocation for struct ubi_work can fail, and in that case the struct ubi_wl_entry is leaked. However, if the memory allocation succeeds, then the tail function takes care of the struct ubi_wl_entry. A free here could result in a double free. To make the error handling simpler, I split the tail function into one piece which does the work and another which frees the struct ubi_work that is passed as an argument. As a result, do_sync_erase() can keep the struct on the stack and we get rid of one error source. Cc: <stable@vger.kernel.org> Fixes: 8199b901a ("UBI: Add fastmap support to the WL sub-system") Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Richard Weinberger <richard@nod.at>
2015-09-29  UBI: return ENOSPC if no enough space available  [shengyong] (1 file, -0/+1)
  UBI: attaching mtd1 to ubi0
  UBI: scanning is finished
  UBI error: init_volumes: not enough PEBs, required 706, available 686
  UBI error: ubi_wl_init: no enough physical eraseblocks (-20, need 1)
  UBI error: ubi_attach_mtd_dev: failed to attach mtd1, error -12  <= NOT ENOMEM
  UBI error: ubi_init: cannot attach mtd1

If the available PEBs are not enough when initializing volumes, return -ENOSPC directly. If the available PEBs are not enough when initializing WL, return -ENOSPC instead of -ENOMEM. Cc: stable@vger.kernel.org Signed-off-by: Sheng Yong <shengyong1@huawei.com> Signed-off-by: Richard Weinberger <richard@nod.at> Reviewed-by: David Gstir <david@sigma-star.at>
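For the WL side this is the availability check in ubi_wl_init(); roughly (recalled from the source, treat as a sketch):

	/* Sketch of the check in ubi_wl_init() that now fails with -ENOSPC */
	if (ubi->avail_pebs < WL_RESERVED_PEBS) {
		ubi_err(ubi, "no enough physical eraseblocks (%d, need %d)",
			ubi->avail_pebs, WL_RESERVED_PEBS);
		if (ubi->corr_peb_count)
			ubi_err(ubi, "%d PEBs are corrupted and not used",
				ubi->corr_peb_count);
		err = -ENOSPC;	/* was -ENOMEM before this patch */
		goto out_free;
	}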
2015-06-03  UBI: Remove unnecessary `\'  [shengyong] (1 file, -1/+1)
Signed-off-by: Sheng Yong <shengyong1@huawei.com> Signed-off-by: Richard Weinberger <richard@nod.at>
2015-03-27  UBI: Fastmap: Remove is_fm_block()  [Richard Weinberger] (1 file, -5/+0)
This function was added to fastmap in a very early stage to have paranoid assertions. With the current fastmap implementation this assert will never trigger as fastmap PEBs are not seen by the WL sub-system. Remove it to save us some CPU cycles. Signed-off-by: Richard Weinberger <richard@nod.at>
2015-03-27  UBI: Fastmap: Introduce may_reserve_for_fm()  [Richard Weinberger] (1 file, -6/+1)
...and kill another #ifdef. Signed-off-by: Richard Weinberger <richard@nod.at>
2015-03-27  UBI: Fastmap: Introduce ubi_fastmap_init()  [Richard Weinberger] (1 file, -7/+1)
...and kill another #ifdef in wl.c. :-) Signed-off-by: Richard Weinberger <richard@nod.at>
2015-03-27  UBI: Move fastmap specific functions out of wl.c  [Richard Weinberger] (1 file, -463/+97)
Fastmap is tightly connected to the WL sub-system; many fastmap-specific functions live in wl.c. To get rid of most #ifdefs in wl.c, move these functions into a new file and include it into wl.c. Signed-off-by: Richard Weinberger <richard@nod.at>
2015-03-27  UBI: Fix stale pointers in ubi->lookuptbl  [Richard Weinberger] (1 file, -16/+31)
In some error paths the WL sub-system gives up on a PEB and frees its ubi_wl_entry struct but does not set the entry in ubi->lookuptbl to NULL. Fastmap can stumble over such a stale pointer, as it uses ubi->lookuptbl to find all PEBs. Fix this by introducing a new helper function which free()s a WL entry and removes the reference from the lookup table. Signed-off-by: Richard Weinberger <richard@nod.at>
2015-03-27  UBI: Fastmap: Locking updates  [Richard Weinberger] (1 file, -7/+12)
a) Rename ubi->fm_sem to ubi->fm_eba_sem, as this semaphore protects EBA changes. b) Turn ubi->fm_mutex into a rw semaphore. It will still serialize fastmap writes but also ensures that ubi_wl_put_peb() is not interrupted by a fastmap write. We use a rw semaphore so that ubi_wl_put_peb() can still be executed in parallel if no fastmap write is happening. Signed-off-by: Richard Weinberger <richard@nod.at>
2015-03-27  UBI: Fastmap: Fix race after ubi_wl_get_peb()  [Richard Weinberger] (1 file, -0/+7)
ubi_wl_get_peb() returns a fresh PEB which can be used by a user of UBI. Due to the pool logic, fastmap will correctly map this PEB upon attach because it will be scanned. If a new fastmap is written (due to heavy parallel io) before the fresh PEB is assigned to the EBA table, it will not be scanned as it is no longer in the pool. So the race window exists between ubi_wl_get_peb() and the EBA table assignment. We have to make sure that no new fastmap can be written while this window is open. To ensure that, ubi_wl_get_peb() will grab ubi->fm_sem in read mode and the user of ubi_wl_get_peb() has to release it after the PEB got assigned. Signed-off-by: Richard Weinberger <richard@nod.at>
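The resulting locking protocol, sketched with recalled names (the semaphore was later renamed to fm_eba_sem by the locking-updates patch above; not the verbatim hunk):

	/* In ubi_wl_get_peb(): */
	down_read(&ubi->fm_sem);
	/* ... pick a free PEB as before ... */
	return pnum;			/* returned while still holding fm_sem for reading */

	/* In the EBA code that consumed the PEB: */
	vol->eba_tbl[lnum] = pnum;	/* PEB is now visible to fastmap via the EBA table */
	up_read(&ubi->fm_sem);		/* only now may a new fastmap be written */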
2015-03-27  UBI: Fastmap: Notify user in case of an ubi_update_fastmap() failure  [Richard Weinberger] (1 file, -1/+5)
If ubi_update_fastmap() fails, notify the user. This is not a hard error, as ubi_update_fastmap() makes sure that upon failure the current on-flash fastmap will not be used upon the next UBI attach. Signed-off-by: Richard Weinberger <richard@nod.at>
2015-03-27  UBI: Fastmap: Fix memory leaks while closing the WL sub-system  [Richard Weinberger] (1 file, -0/+18)
Add a ubi_fastmap_close() to free all resources used by fastmap at WL shutdown. Signed-off-by: Richard Weinberger <richard@nod.at> Tested-by: Guido Martínez <guido@vanguardiasur.com.ar> Reviewed-by: Guido Martínez <guido@vanguardiasur.com.ar>
2015-03-27  UBI: Fastmap: Don't allocate new ubi_wl_entry objects  [Richard Weinberger] (1 file, -3/+0)
There is no need to allocate new ones every time; we can reuse the existing ones. This makes the code cleaner and easier to follow. Signed-off-by: Richard Weinberger <richard@nod.at> Reviewed-by: Tanya Brokhman <tlinder@codeaurora.org> Reviewed-by: Guido Martínez <guido@vanguardiasur.com.ar>