author     Linus Torvalds <torvalds@linux-foundation.org>  2022-10-10 20:41:21 +0300
committer  Linus Torvalds <torvalds@linux-foundation.org>  2022-10-10 20:41:21 +0300
commit     8adc0486f3c85e3c1e40c1ce6884317a17c380d3 (patch)
tree       a65b510b86abd64c2d9ec2ee4d49bda2e9c438be /init
parent     52abb27abfff8c5ddf44eef4d759f3d1e9f166c5 (diff)
parent     a890d1c657ecba73a7b28591c92587aef1be1888 (diff)
download   linux-8adc0486f3c85e3c1e40c1ce6884317a17c380d3.tar.xz
Merge tag 'random-6.1-rc1-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/crng/random
Pull random number generator updates from Jason Donenfeld:

 - Huawei reported that when they updated their kernel from 4.4 to
   something much newer, some userspace code they had broke, the culprit
   being the accidental removal of O_NONBLOCK from /dev/random way back
   in 5.6. It's been gone for over 2 years now and this is the first
   we've heard of it, but userspace breakage is userspace breakage, so
   O_NONBLOCK is now back. (A minimal userspace sketch of the restored
   behaviour follows this list.)

 - Use randomness from hardware RNGs much more often during early boot,
   at the same interval that crng reseeds are done, from Dominik.

 - A semantic change in hardware RNG throttling, so that the hwrng
   framework can properly feed random.c with randomness from hardware
   RNGs that aren't specifically marked as creditable.

   A related patch coming to you via Herbert's hwrng tree depends on
   this one, not to compile, but just to function properly, so you may
   want to merge this PULL before that one.

 - A fix to clamp credited bits from the interrupts pool to the size of
   the pool sample. This is mainly just a theoretical fix, as it'd be
   pretty hard to exceed it in practice.

 - Oracle reported that InfiniBand TCP latency regressed by around
   10-15% after a change a few cycles ago made at the request of the RT
   folks, in which we hoisted a somewhat rare operation (1 in 1024
   times) out of the hard IRQ handler and into a workqueue, a pretty
   common and boring pattern.

   It turns out, though, that scheduling a worker from there has
   overhead of its own, whereas scheduling a timer on that same CPU for
   the next jiffy amortizes better and doesn't incur the same overhead.

   I also eliminated a cache miss by moving the work_struct (and
   subsequently, the timer_list) to below a critical cache line, so that
   the more critical members that are accessed on every hard IRQ aren't
   split between two cache lines.

 - The boot-time initialization of the RNG has been split into two
   approximate phases: what we can accomplish before timekeeping is
   possible and what we can accomplish after.

   This winds up being useful so that we can use RDRAND to seed the RNG
   before CONFIG_SLAB_FREELIST_RANDOM=y systems initialize slabs, in
   addition to other early uses of randomness. The effect is that
   systems with RDRAND (or a bootloader seed) will never see any
   warnings at all when setting CONFIG_WARN_ALL_UNSEEDED_RANDOM=y. And
   kfence benefits from getting a better seed of its own.

 - Small systems without much entropy sometimes wind up putting some
   truncated serial number read from flash into hostname, so contribute
   utsname changes to the RNG, without crediting.

 - Add smaller batches to serve requests for smaller integers, and make
   use of them when people ask for random numbers bounded by a given
   compile-time constant. This has positive effects all over the tree,
   most notably in networking and kfence. (A sketch of the idea follows
   the shortlog below.)

 - The original jitter algorithm intended (I believe) to schedule the
   timer for the next jiffy, not the next-next jiffy, yet it used
   mod_timer(jiffies + 1), which will fire on the next-next jiffy,
   instead of what I believe was intended, mod_timer(jiffies), which
   will fire on the next jiffy. So fix that.

 - Fix a comment typo, from William.
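For the first item above, a minimal userspace sketch of the restored
O_NONBLOCK behaviour on /dev/random. This program is illustrative only
and is not part of the patch series; the output strings and error
handling are assumptions.

/*
 * With O_NONBLOCK restored, a read() on /dev/random before the kernel's
 * RNG is initialized is expected to fail with EAGAIN instead of blocking;
 * once the RNG is ready, the read returns data as usual.
 */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	unsigned char buf[16];
	int fd = open("/dev/random", O_RDONLY | O_NONBLOCK);

	if (fd < 0) {
		perror("open");
		return 1;
	}

	ssize_t n = read(fd, buf, sizeof(buf));
	if (n < 0 && errno == EAGAIN)
		fprintf(stderr, "RNG not ready, would have blocked\n");
	else if (n < 0)
		perror("read");
	else
		printf("got %zd random bytes\n", n);

	close(fd);
	return 0;
}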
* tag 'random-6.1-rc1-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/crng/random:
  random: clear new batches when bringing new CPUs online
  random: fix typos in get_random_bytes() comment
  random: schedule jitter credit for next jiffy, not in two jiffies
  prandom: make use of smaller types in prandom_u32_max
  random: add 8-bit and 16-bit batches
  utsname: contribute changes to RNG
  random: use init_utsname() instead of utsname()
  kfence: use better stack hash seed
  random: split initialization into early step and later step
  random: use expired timer rather than wq for mixing fast pool
  random: avoid reading two cache lines on irq randomness
  random: clamp credited irq bits to maximum mixed
  random: throttle hwrng writes if no entropy is credited
  random: use hwgenerator randomness more frequently at early boot
  random: restore O_NONBLOCK support
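The "add 8-bit and 16-bit batches" and "make use of smaller types in
prandom_u32_max" entries above fit together; the sketch below shows the
idea only and is not the code from the series. The real prandom_u32_max()
additionally performs a rejection step for exact uniformity, which is
omitted here for brevity, and the helper name is hypothetical.

/*
 * Illustrative sketch: when the requested bound is a compile-time constant
 * that fits in 8 or 16 bits, draw from the smaller per-CPU batch
 * (get_random_u8()/get_random_u16(), added by this series) so fewer batched
 * bytes are consumed per call.
 */
#include <linux/random.h>
#include <linux/types.h>

static inline u32 example_bounded_random(u32 ceil)
{
	/* Multiply-shift reduction of a uniform draw onto [0, ceil). */
	if (__builtin_constant_p(ceil) && ceil <= 1U << 8)
		return ((u32)get_random_u8() * ceil) >> 8;
	if (__builtin_constant_p(ceil) && ceil <= 1U << 16)
		return ((u32)get_random_u16() * ceil) >> 16;
	return ((u64)get_random_u32() * ceil) >> 32;
}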
Diffstat (limited to 'init')
-rw-r--r--  init/main.c | 17 ++++++++---------
1 file changed, 8 insertions(+), 9 deletions(-)
diff --git a/init/main.c b/init/main.c
index 1fe7942f5d4a..0866e5d0d467 100644
--- a/init/main.c
+++ b/init/main.c
@@ -976,6 +976,9 @@ asmlinkage __visible void __init __no_sanitize_address start_kernel(void)
parse_args("Setting extra init args", extra_init_args,
NULL, 0, -1, -1, NULL, set_init_arg);
+ /* Architectural and non-timekeeping rng init, before allocator init */
+ random_init_early(command_line);
+
/*
* These use large bootmem allocations and must precede
* kmem_cache_init()
@@ -1035,17 +1038,13 @@ asmlinkage __visible void __init __no_sanitize_address start_kernel(void)
hrtimers_init();
softirq_init();
timekeeping_init();
- kfence_init();
time_init();
- /*
- * For best initial stack canary entropy, prepare it after:
- * - setup_arch() for any UEFI RNG entropy and boot cmdline access
- * - timekeeping_init() for ktime entropy used in random_init()
- * - time_init() for making random_get_entropy() work on some platforms
- * - random_init() to initialize the RNG from from early entropy sources
- */
- random_init(command_line);
+ /* This must be after timekeeping is initialized */
+ random_init();
+
+ /* These make use of the fully initialized rng */
+ kfence_init();
boot_init_stack_canary();
perf_event_init();