commit 4ffa4be5a14beeb008bd2b4fbc681222bfec90c7 Author: Greg Kroah-Hartman Date: Sat Jun 25 11:45:20 2022 +0200 Linux 4.9.320 Link: https://lore.kernel.org/r/20220623164344.053938039@linuxfoundation.org Tested-by: Florian Fainelli Tested-by: Pavel Machek (CIP) Tested-by: Shuah Khan Tested-by: Jon Hunter Tested-by: Guenter Roeck Signed-off-by: Greg Kroah-Hartman commit a81a6b204a303116e64e0a6288b701cbda9d4de7 Author: Willy Tarreau Date: Mon May 2 10:46:14 2022 +0200 tcp: drop the hash_32() part from the index calculation commit e8161345ddbb66e449abde10d2fdce93f867eba9 upstream. In commit 190cc82489f4 ("tcp: change source port randomizarion at connect() time"), the table_perturb[] array was introduced and an index was taken from the port_offset via hash_32(). But it turns out that hash_32() performs a multiplication while the input here comes from the output of SipHash in secure_seq, that is well distributed enough to avoid the need for yet another hash. Suggested-by: Amit Klein Reviewed-by: Eric Dumazet Signed-off-by: Willy Tarreau Signed-off-by: Jakub Kicinski Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit 3c78eea640f69e2198b69128173e6d65a0bcdc02 Author: Willy Tarreau Date: Mon May 2 10:46:13 2022 +0200 tcp: increase source port perturb table to 2^16 commit 4c2c8f03a5ab7cb04ec64724d7d176d00bcc91e5 upstream. Moshe Kol, Amit Klein, and Yossi Gilad reported being able to accurately identify a client by forcing it to emit only 40 times more connections than there are entries in the table_perturb[] table. The previous two improvements consisting in resalting the secret every 10s and adding randomness to each port selection only slightly improved the situation, and the current value of 2^8 was too small as it's not very difficult to make a client emit 10k connections in less than 10 seconds. Thus we're increasing the perturb table from 2^8 to 2^16 so that the same precision now requires 2.6M connections, which is more difficult in this time frame and harder to hide as a background activity. The impact is that the table now uses 256 kB instead of 1 kB, which could mostly affect devices making frequent outgoing connections. However such components usually target a small set of destinations (load balancers, database clients, perf assessment tools), and in practice only a few entries will be visited, like before. A live test at 1 million connections per second showed no performance difference from the previous value. Reported-by: Moshe Kol Reported-by: Yossi Gilad Reported-by: Amit Klein Reviewed-by: Eric Dumazet Signed-off-by: Willy Tarreau Signed-off-by: Jakub Kicinski Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit dd82067bd6cabbc25aa0f459e91a8e5e08fa4782 Author: Willy Tarreau Date: Mon May 2 10:46:12 2022 +0200 tcp: dynamically allocate the perturb table used by source ports commit e9261476184be1abd486c9434164b2acbe0ed6c2 upstream. We'll need to further increase the size of this table and it's likely that at some point its size will not be suitable anymore for a static table. Let's allocate it on boot from inet_hashinfo2_init(), which is called from tcp_init(). 
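Taken together, the source-port patches in this release (this one and the related ones above and below) converge on the scheme modeled here in plain userspace C; the names, the table contents, and the exact bit split are illustrative stand-ins for the kernel code, not verbatim from it.

    #include <stdint.h>
    #include <stdio.h>

    #define INET_TABLE_PERTURB_SHIFT 16
    #define INET_TABLE_PERTURB_SIZE  (1U << INET_TABLE_PERTURB_SHIFT)

    /* The kernel now allocates this at boot rather than using a static table. */
    static uint32_t table_perturb[INET_TABLE_PERTURB_SIZE];

    static uint32_t pick_offset(uint64_t port_offset, uint32_t remaining)
    {
        /* Low bits index the table directly: SipHash output is already well
         * distributed, so no extra hash_32() step is needed. */
        uint32_t index = port_offset & (INET_TABLE_PERTURB_SIZE - 1);

        /* High bits feed the port offset, so index and offset bear no
         * direct relation to each other. */
        uint32_t offset = (uint32_t)(port_offset >> 32) + table_perturb[index];

        return offset % remaining; /* starting point of the port search */
    }

    int main(void)
    {
        /* Stand-in for the siphash_3u32()-based secure_seq output. */
        uint64_t port_offset = 0x123456789abcdef0ULL;

        printf("offset=%u\n", pick_offset(port_offset, 60999 - 32768 + 1));
        return 0;
    }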
Cc: Moshe Kol Cc: Yossi Gilad Cc: Amit Klein Reviewed-by: Eric Dumazet Signed-off-by: Willy Tarreau Signed-off-by: Jakub Kicinski [bwh: Backported to 4.9: - There is no inet_hashinfo2_init(), so allocate the table in inet_hashinfo_init() when called by TCP - Adjust context] Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit aa7722529f6d7f3be1dd7b94dcce3f2689ba9756 Author: Willy Tarreau Date: Mon May 2 10:46:11 2022 +0200 tcp: add small random increments to the source port commit ca7af0402550f9a0b3316d5f1c30904e42ed257d upstream. Here we randomly add between 0 and 7 increments to the selected source port in order to add some noise to the source port selection that will make the next port less predictable. With the default port range of 32768-60999 this means a worst-case reuse scenario of 14116/8=1764 connections between two consecutive uses of the same port, with an average of 14116/4.5=3137. This code was stressed at more than 800000 connections per second to a fixed target with all connections closed by the client using RSTs (worst condition), and only 2 connections failed among 13 billion, despite the hash being reseeded every 10 seconds, indicating a perfectly safe situation. Cc: Moshe Kol Cc: Yossi Gilad Cc: Amit Klein Reviewed-by: Eric Dumazet Signed-off-by: Willy Tarreau Signed-off-by: Jakub Kicinski Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit 2ed413f140bbb527745e3b42550f44d07c9dfd2a Author: Willy Tarreau Date: Mon May 2 10:46:09 2022 +0200 tcp: use different parts of the port_offset for index and offset commit 9e9b70ae923baf2b5e8a0ea4fd0c8451801ac526 upstream. Amit Klein suggests that we use different parts of port_offset for the table's index and the port offset so that there is no direct relation between them. Cc: Jason A. Donenfeld Cc: Moshe Kol Cc: Yossi Gilad Cc: Amit Klein Reviewed-by: Eric Dumazet Signed-off-by: Willy Tarreau Signed-off-by: Jakub Kicinski Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit 576696ed0dee677ec868960c39d96ae3b8c95a3f Author: Willy Tarreau Date: Mon May 2 10:46:08 2022 +0200 secure_seq: use the 64 bits of the siphash for port offset calculation commit b2d057560b8107c633b39aabe517ff9d93f285e3 upstream. SipHash replaced MD5 in secure_ipv{4,6}_port_ephemeral() via commit 7cd23e5300c1 ("secure_seq: use SipHash in place of MD5"), but the output remained truncated to 32 bits only. In order to exploit more bits from the hash, let's make the functions return the full 64 bits of siphash_3u32(). We also make sure the port offset calculation in __inet_hash_connect() remains done on 32 bits to avoid the need for div_u64_rem() and an extra cost on 32-bit systems. Cc: Jason A. Donenfeld Cc: Moshe Kol Cc: Yossi Gilad Cc: Amit Klein Reviewed-by: Eric Dumazet Signed-off-by: Willy Tarreau Signed-off-by: Jakub Kicinski Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit 05a12e5c40635bb8b56fbb344f8fd6cbf97e748a Author: Eric Dumazet Date: Tue Feb 9 11:20:28 2021 -0800 tcp: add some entropy in __inet_hash_connect() commit c579bd1b4021c42ae247108f1e6f73dd3f08600c upstream. Even when implementing RFC 6056 3.3.4 (Algorithm 4: Double-Hash Port Selection Algorithm), a patient attacker could still be able to collect enough state from an otherwise idle host. The idea of this patch is to inject some noise in the cases where __inet_hash_connect() found a candidate on the first attempt.
This noise should not significantly reduce the collision avoidance, and should be zero if the connection table is already well used. Note that this is not implementing RFC 6056 3.3.5 because we think Algorithm 5 could hurt typical workloads. Signed-off-by: Eric Dumazet Cc: David Dworken Cc: Willem de Bruijn Signed-off-by: David S. Miller Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit 136b4799419a275155c74e8a637cefd8a0538321 Author: Eric Dumazet Date: Tue Feb 9 11:20:27 2021 -0800 tcp: change source port randomizarion at connect() time commit 190cc82489f46f9d88e73c81a47e14f80a791e1a upstream. RFC 6056 (Recommendations for Transport-Protocol Port Randomization) provides a good summary of why source port selection needs extra care. David Dworken reminded us that Linux implements Algorithm 3 as described in RFC 6056 3.3.3. Quoting David: In the context of the web, this creates an interesting info leak where websites can count how many TCP connections a user's computer is establishing over time. For example, this allows a website to count exactly how many subresources a third-party website loaded. This also allows: - Distinguishing between different users behind a VPN based on distinct source port ranges. - Tracking users over time across multiple networks. - Covert communication channels between different browsers/browser profiles running on the same computer. - Tracking what applications are running on a computer based on the pattern of how fast source ports are getting incremented. Section 3.3.4 describes an enhancement that reduces attackers' ability to use the basic information currently stored into the shared 'u32 hint'. This change also decreases the collision rate when multiple applications need to connect() to different destinations. Signed-off-by: Eric Dumazet Reported-by: David Dworken Cc: Willem de Bruijn Signed-off-by: David S. Miller [bwh: Backported to 4.9: adjust context] Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit b79d4d0da659a3c7bd1d5913e62188ceb9be9c49 Author: Miklos Szeredi Date: Mon Mar 7 16:30:44 2022 +0100 fuse: fix pipe buffer lifetime for direct_io commit 0c4bcfdecb1ac0967619ee7ff44871d93c08c909 upstream. In FOPEN_DIRECT_IO mode, fuse_file_write_iter() calls fuse_direct_write_iter(), which normally calls fuse_direct_io(), which then imports the write buffer with fuse_get_user_pages(), which uses iov_iter_get_pages() to grab references to userspace pages instead of actually copying memory. On the filesystem device side, these pages can then either be read to userspace (via fuse_dev_read()), or splice()d over into a pipe using fuse_dev_splice_read() as pipe buffers with &nosteal_pipe_buf_ops. This is wrong because after fuse_dev_do_read() unlocks the FUSE request, the userspace filesystem can mark the request as completed, causing write() to return. At that point, the userspace filesystem should no longer have access to the pipe buffer. Fix by copying pages coming from the user address space to new pipe buffers. Reported-by: Jann Horn Fixes: c3021629a0d8 ("fuse: support splice() reading from fuse device") Cc: Signed-off-by: Miklos Szeredi Signed-off-by: Zach O'Keefe Signed-off-by: Greg Kroah-Hartman Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit fd97de9c7b973f46a6103f4170c5efc7b8ef8797 Author: Linus Torvalds Date: Mon Mar 28 11:37:05 2022 -0700 Reinstate some of "swiotlb: rework "fix info leak with DMA_FROM_DEVICE"" commit 901c7280ca0d5e2b4a8929fbe0bfb007ac2a6544 upstream.
Halil Pasic points out [1] that instead of the full revert of that commit (the revert in bddac7c1e02b), a partial revert that only reverts the problematic case but still keeps some of the cleanups is probably better. And that partial revert [2] had already been verified by Oleksandr Natalenko to also fix the issue [3]; I had just missed that in the long discussion. So let's reinstate the cleanups from commit aa6f8dcbab47 ("swiotlb: rework "fix info leak with DMA_FROM_DEVICE""), and effectively only revert the part that caused problems. Link: https://lore.kernel.org/all/20220328013731.017ae3e3.pasic@linux.ibm.com/ [1] Link: https://lore.kernel.org/all/20220324055732.GB12078@lst.de/ [2] Link: https://lore.kernel.org/all/4386660.LvFx2qVVIh@natalenko.name/ [3] Suggested-by: Halil Pasic Tested-by: Oleksandr Natalenko Cc: Christoph Hellwig Signed-off-by: Linus Torvalds [OP: backport to 4.14: apply swiotlb_tbl_map_single() changes in lib/swiotlb.c] Signed-off-by: Ovidiu Panait Signed-off-by: Greg Kroah-Hartman [bwh: Backported to 4.9: adjust context] Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit c132f2ba716b5ee6b35f82226a6e5417d013d753 Author: Halil Pasic Date: Fri Feb 11 02:12:52 2022 +0100 swiotlb: fix info leak with DMA_FROM_DEVICE commit ddbd89deb7d32b1fbb879f48d68fda1a8ac58e8e upstream. The problem I'm addressing was discovered by the LTP test covering CVE-2018-1000204. A short description of what happens follows:

1) The test case issues a command code 00 (TEST UNIT READY) via the SG_IO interface with: dxfer_len == 524288, dxfer_dir == SG_DXFER_FROM_DEV and a corresponding dxferp. The peculiar thing about this is that TUR is not reading from the device.

2) In sg_start_req() the invocation of blk_rq_map_user() effectively bounces the user-space buffer, as if the device were to transfer into it. Since commit a45b599ad808 ("scsi: sg: allocate with __GFP_ZERO in sg_build_indirect()") we make sure this first bounce buffer is allocated with GFP_ZERO.

3) For the rest of the story we keep ignoring that we have a TUR, so the device won't touch the buffer we prepare, as if we had a DMA_FROM_DEVICE type of situation. My setup uses a virtio-scsi device and the buffer allocated by SG is mapped by the function virtqueue_add_split(), which uses DMA_FROM_DEVICE for the "in" sgs (here scatter-gather and not scsi generics). This mapping involves bouncing via the swiotlb (we need swiotlb to do virtio in protected guests like s390 Secure Execution or AMD SEV).

4) When the SCSI TUR is done, we first copy back the content of the second (that is, swiotlb) bounce buffer (which most likely contains some previous IO data) to the first bounce buffer, which contains all zeros. Then we copy back the content of the first bounce buffer to the user-space buffer.

5) The test case detects that the buffer, which it zero-initialized, isn't all zeros and fails.

One can argue that this is a swiotlb problem, because without swiotlb we leak all zeros, and the swiotlb should be transparent in the sense that it does not affect the outcome (if all other participants are well behaved). Copying the content of the original buffer into the swiotlb buffer is the only way I can think of to make swiotlb transparent in such scenarios. So let's do just that if in doubt, but allow the driver to tell us that the whole mapped buffer is going to be overwritten, in which case we can preserve the old behavior and avoid the performance impact of the extra bounce.
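A hedged userspace model of the rule this patch establishes (the names are illustrative, not the kernel's API): on map, the original buffer is copied into the bounce slot even for DMA_FROM_DEVICE, unless a caller attribute promises that the device overwrites the whole buffer; a device that never writes then bounces the caller's own bytes back instead of stale slot contents.

    #include <string.h>

    enum dma_data_direction { DMA_TO_DEVICE, DMA_FROM_DEVICE, DMA_BIDIRECTIONAL };

    #define ATTR_SKIP_CPU_SYNC 0x1UL /* models DMA_ATTR_SKIP_CPU_SYNC */

    void bounce_map(void *slot, const void *orig, size_t len,
                    enum dma_data_direction dir, unsigned long attrs)
    {
        (void)dir; /* before the fix, DMA_FROM_DEVICE skipped the copy below */
        if (!(attrs & ATTR_SKIP_CPU_SYNC))
            memcpy(slot, orig, len); /* keep the swiotlb transparent */
    }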
Signed-off-by: Halil Pasic Signed-off-by: Christoph Hellwig [OP: backport to 4.14: apply swiotlb_tbl_map_single() changes in lib/swiotlb.c] Signed-off-by: Ovidiu Panait Signed-off-by: Greg Kroah-Hartman [bwh: Backported to 4.9: adjust context] Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit ca6226b5c5b4cf8c41ab7c759686c9aab43a2a33 Author: Colin Ian King Date: Wed Jul 15 17:26:04 2020 +0100 xprtrdma: fix incorrect header size calculations commit 912288442cb2f431bf3c8cb097a5de83bc6dbac1 upstream. Currently the header size calculations are using an assignment operator instead of a += operator when accumulating the header size leading to incorrect sizes. Fix this by using the correct operator. Addresses-Coverity: ("Unused value") Fixes: 302d3deb2068 ("xprtrdma: Prevent inline overflow") Signed-off-by: Colin Ian King Reviewed-by: Chuck Lever Signed-off-by: Anna Schumaker [bwh: Backported to 4.9: adjust context] Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman commit 26b3191524103948666bca1f26370b0aef1fe182 Author: Christian Borntraeger Date: Mon May 30 11:27:06 2022 +0200 s390/mm: use non-quiescing sske for KVM switch to keyed guest commit 3ae11dbcfac906a8c3a480e98660a823130dc16a upstream. The switch to a keyed guest does not require a classic sske as the other guest CPUs are not accessing the key before the switch is complete. By using the NQ SSKE things are faster especially with multiple guests. Signed-off-by: Christian Borntraeger Suggested-by: Janis Schoetterl-Glausch Reviewed-by: Claudio Imbrenda Link: https://lore.kernel.org/r/20220530092706.11637-3-borntraeger@linux.ibm.com Signed-off-by: Christian Borntraeger Signed-off-by: Heiko Carstens Signed-off-by: Greg Kroah-Hartman commit 267b8fa3f5bf8ca6458670298a02f7438855bd80 Author: James Chapman Date: Fri Feb 23 17:45:46 2018 +0000 l2tp: fix race in pppol2tp_release with session object destroy commit d02ba2a6110c530a32926af8ad441111774d2893 upstream. pppol2tp_release uses call_rcu to put the final ref on its socket. But the session object doesn't hold a ref on the session socket so may be freed while the pppol2tp_put_sk RCU callback is scheduled. Fix this by having the session hold a ref on its socket until the session is destroyed. It is this ref that is dropped via call_rcu. Sessions are also deleted via l2tp_tunnel_closeall. This must now also put the final ref via call_rcu. So move the call_rcu call site into pppol2tp_session_close so that this happens in both destroy paths. A common destroy path should really be implemented, perhaps with l2tp_tunnel_closeall calling l2tp_session_delete like pppol2tp_release does, but this will be looked at later. 
ODEBUG: activate active (active state 1) object type: rcu_head hint: (null) WARNING: CPU: 3 PID: 13407 at lib/debugobjects.c:291 debug_print_object+0x166/0x220 Modules linked in: CPU: 3 PID: 13407 Comm: syzbot_19c09769 Not tainted 4.16.0-rc2+ #38 Hardware name: innotek GmbH VirtualBox/VirtualBox, BIOS VirtualBox 12/01/2006 RIP: 0010:debug_print_object+0x166/0x220 RSP: 0018:ffff880013647a00 EFLAGS: 00010082 RAX: dffffc0000000008 RBX: 0000000000000003 RCX: ffffffff814d3333 RDX: 0000000000000000 RSI: 0000000000000001 RDI: ffff88001a59f6d0 RBP: ffff880013647a40 R08: 0000000000000000 R09: 0000000000000001 R10: ffff8800136479a8 R11: 0000000000000000 R12: 0000000000000001 R13: ffffffff86161420 R14: ffffffff85648b60 R15: 0000000000000000 FS: 0000000000000000(0000) GS:ffff88001a580000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000020e77000 CR3: 0000000006022000 CR4: 00000000000006e0 Call Trace: debug_object_activate+0x38b/0x530 ? debug_object_assert_init+0x3b0/0x3b0 ? __mutex_unlock_slowpath+0x85/0x8b0 ? pppol2tp_session_destruct+0x110/0x110 __call_rcu.constprop.66+0x39/0x890 ? __call_rcu.constprop.66+0x39/0x890 call_rcu_sched+0x17/0x20 pppol2tp_release+0x2c7/0x440 ? fcntl_setlk+0xca0/0xca0 ? sock_alloc_file+0x340/0x340 sock_release+0x92/0x1e0 sock_close+0x1b/0x20 __fput+0x296/0x6e0 ____fput+0x1a/0x20 task_work_run+0x127/0x1a0 do_exit+0x7f9/0x2ce0 ? SYSC_connect+0x212/0x310 ? mm_update_next_owner+0x690/0x690 ? up_read+0x1f/0x40 ? __do_page_fault+0x3c8/0xca0 do_group_exit+0x10d/0x330 ? do_group_exit+0x330/0x330 SyS_exit_group+0x22/0x30 do_syscall_64+0x1e0/0x730 ? trace_hardirqs_off_thunk+0x1a/0x1c entry_SYSCALL_64_after_hwframe+0x42/0xb7 RIP: 0033:0x7f362e471259 RSP: 002b:00007ffe389abe08 EFLAGS: 00000202 ORIG_RAX: 00000000000000e7 RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f362e471259 RDX: 00007f362e471259 RSI: 000000000000002e RDI: 0000000000000000 RBP: 00007ffe389abe30 R08: 0000000000000000 R09: 00007f362e944270 R10: 0000000000000000 R11: 0000000000000202 R12: 0000000000400b60 R13: 00007ffe389abf50 R14: 0000000000000000 R15: 0000000000000000 Code: 8d 3c dd a0 8f 64 85 48 89 fa 48 c1 ea 03 80 3c 02 00 75 7b 48 8b 14 dd a0 8f 64 85 4c 89 f6 48 c7 c7 20 85 64 85 e 8 2a 55 14 ff <0f> 0b 83 05 ad 2a 68 04 01 48 83 c4 18 5b 41 5c 41 5d 41 5e 41 Fixes: ee40fb2e1eb5b ("l2tp: protect sock pointer of struct pppol2tp_session with RCU") Signed-off-by: James Chapman Signed-off-by: David S. Miller Cc: Lee Jones Signed-off-by: Greg Kroah-Hartman commit 357fa382bb89ac3fb72cb93a4c8538251b06a759 Author: James Chapman Date: Fri Feb 23 17:45:44 2018 +0000 l2tp: don't use inet_shutdown on ppp session destroy commit 225eb26489d05c679a4c4197ffcb81c81e9dcaf4 upstream. Previously, if a ppp session was closed, we called inet_shutdown to mark the socket as unconnected such that userspace would get errors and then close the socket. This could race with userspace closing the socket. Instead, leave userspace to close the socket in its own time (our session will be detached anyway). BUG: KASAN: use-after-free in inet_shutdown+0x5d/0x1c0 Read of size 4 at addr ffff880010ea3ac0 by task syzbot_347bd5ac/8296 CPU: 3 PID: 8296 Comm: syzbot_347bd5ac Not tainted 4.16.0-rc1+ #91 Hardware name: innotek GmbH VirtualBox/VirtualBox, BIOS VirtualBox 12/01/2006 Call Trace: dump_stack+0x101/0x157 ? inet_shutdown+0x5d/0x1c0 print_address_description+0x78/0x260 ? inet_shutdown+0x5d/0x1c0 kasan_report+0x240/0x360 __asan_load4+0x78/0x80 inet_shutdown+0x5d/0x1c0 ? 
pppol2tp_show+0x80/0x80 pppol2tp_session_close+0x68/0xb0 l2tp_tunnel_closeall+0x199/0x210 ? udp_v6_flush_pending_frames+0x90/0x90 l2tp_udp_encap_destroy+0x6b/0xc0 ? l2tp_tunnel_del_work+0x2e0/0x2e0 udpv6_destroy_sock+0x8c/0x90 sk_common_release+0x47/0x190 udp_lib_close+0x15/0x20 inet_release+0x85/0xd0 inet6_release+0x43/0x60 sock_release+0x53/0x100 ? sock_alloc_file+0x260/0x260 sock_close+0x1b/0x20 __fput+0x19f/0x380 ____fput+0x1a/0x20 task_work_run+0xd2/0x110 exit_to_usermode_loop+0x18d/0x190 do_syscall_64+0x389/0x3b0 entry_SYSCALL_64_after_hwframe+0x26/0x9b RIP: 0033:0x7fe240a45259 RSP: 002b:00007fe241132df8 EFLAGS: 00000297 ORIG_RAX: 0000000000000003 RAX: 0000000000000000 RBX: 0000000000000000 RCX: 00007fe240a45259 RDX: 00007fe240a45259 RSI: 0000000000000000 RDI: 00000000000000a5 RBP: 00007fe241132e20 R08: 00007fe241133700 R09: 0000000000000000 R10: 00007fe241133700 R11: 0000000000000297 R12: 0000000000000000 R13: 00007ffc49aff84f R14: 0000000000000000 R15: 00007fe241141040 Allocated by task 8331: save_stack+0x43/0xd0 kasan_kmalloc+0xad/0xe0 kasan_slab_alloc+0x12/0x20 kmem_cache_alloc+0x144/0x3e0 sock_alloc_inode+0x22/0x130 alloc_inode+0x3d/0xf0 new_inode_pseudo+0x1c/0x90 sock_alloc+0x30/0x110 __sock_create+0xaa/0x4c0 SyS_socket+0xbe/0x130 do_syscall_64+0x128/0x3b0 entry_SYSCALL_64_after_hwframe+0x26/0x9b Freed by task 8314: save_stack+0x43/0xd0 __kasan_slab_free+0x11a/0x170 kasan_slab_free+0xe/0x10 kmem_cache_free+0x88/0x2b0 sock_destroy_inode+0x49/0x50 destroy_inode+0x77/0xb0 evict+0x285/0x340 iput+0x429/0x530 dentry_unlink_inode+0x28c/0x2c0 __dentry_kill+0x1e3/0x2f0 dput.part.21+0x500/0x560 dput+0x24/0x30 __fput+0x2aa/0x380 ____fput+0x1a/0x20 task_work_run+0xd2/0x110 exit_to_usermode_loop+0x18d/0x190 do_syscall_64+0x389/0x3b0 entry_SYSCALL_64_after_hwframe+0x26/0x9b Fixes: fd558d186df2c ("l2tp: Split pppol2tp patch into separate l2tp and ppp parts") Signed-off-by: James Chapman Signed-off-by: David S. Miller Cc: Lee Jones Signed-off-by: Greg Kroah-Hartman commit 0dc2fca8e4f9ac4a40e8424a10163369cca0cc06 Author: Zhang Yi Date: Wed Jun 1 17:27:17 2022 +0800 ext4: add reserved GDT blocks check commit b55c3cd102a6f48b90e61c44f7f3dda8c290c694 upstream. We capture a NULL pointer issue when resizing a corrupt ext4 image whose resize_inode feature has been freshly cleared (without running e2fsck). It can be simply reproduced by the following steps. The problem is that the resize_inode feature was cleared, so the filesystem will be converted to meta_bg mode in ext4_resize_fs(), but es->s_reserved_gdt_blocks was not reduced to zero, and so we could mistakenly call reserve_backup_gdb() and pass an uninitialized resize_inode to it when adding new group descriptors.

    mkfs.ext4 /dev/sda 3G
    tune2fs -O ^resize_inode /dev/sda #forget to run requested e2fsck
    mount /dev/sda /mnt
    resize2fs /dev/sda 8G

======== BUG: kernel NULL pointer dereference, address: 0000000000000028 CPU: 19 PID: 3243 Comm: resize2fs Not tainted 5.18.0-rc7-00001-gfde086c5ebfd #748 ... RIP: 0010:ext4_flex_group_add+0xe08/0x2570 ... Call Trace: ext4_resize_fs+0xbec/0x1660 __ext4_ioctl+0x1749/0x24e0 ext4_ioctl+0x12/0x20 __x64_sys_ioctl+0xa6/0x110 do_syscall_64+0x3b/0x90 entry_SYSCALL_64_after_hwframe+0x44/0xae RIP: 0033:0x7f2dd739617b ======== The fix is simple: add a check in ext4_resize_begin() to make sure that es->s_reserved_gdt_blocks is zero when the resize_inode feature is disabled.
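A sketch of the added check, close to but not necessarily verbatim from the upstream patch:

    if (!ext4_has_feature_resize_inode(sb) &&
        es->s_reserved_gdt_blocks) {
            /* corrupt image: reserved GDT blocks without resize_inode */
            ext4_error(sb, "resize_inode disabled but reserved GDT blocks exist");
            return -EFSCORRUPTED;
    }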
Cc: stable@kernel.org Signed-off-by: Zhang Yi Reviewed-by: Ritesh Harjani Reviewed-by: Jan Kara Link: https://lore.kernel.org/r/20220601092717.763694-1-yi.zhang@huawei.com Signed-off-by: Theodore Ts'o Signed-off-by: Greg Kroah-Hartman commit 984ceb2fc8f2b8cd7d38f9702ba5586c30804153 Author: Ding Xiang Date: Mon May 30 18:00:47 2022 +0800 ext4: make variable "count" signed commit bc75a6eb856cb1507fa907bf6c1eda91b3fef52f upstream. Since dx_make_map() may now return -EFSCORRUPTED, change "count" to a signed integer so we can correctly check for an error code returned by dx_make_map(). Fixes: 46c116b920eb ("ext4: verify dir block before splitting it") Cc: stable@kernel.org Signed-off-by: Ding Xiang Link: https://lore.kernel.org/r/20220530100047.537598-1-dingxiang@cmss.chinamobile.com Signed-off-by: Theodore Ts'o Signed-off-by: Greg Kroah-Hartman commit 6880fb2e64331b9fdc85d3f32b1d7e81ad8703f1 Author: Baokun Li Date: Sat May 28 19:00:15 2022 +0800 ext4: fix bug_on ext4_mb_use_inode_pa commit a08f789d2ab5242c07e716baf9a835725046be89 upstream. Hulk Robot reported a BUG_ON: ================================================================== kernel BUG at fs/ext4/mballoc.c:3211! [...] RIP: 0010:ext4_mb_mark_diskspace_used.cold+0x85/0x136f [...] Call Trace: ext4_mb_new_blocks+0x9df/0x5d30 ext4_ext_map_blocks+0x1803/0x4d80 ext4_map_blocks+0x3a4/0x1a10 ext4_writepages+0x126d/0x2c30 do_writepages+0x7f/0x1b0 __filemap_fdatawrite_range+0x285/0x3b0 file_write_and_wait_range+0xb1/0x140 ext4_sync_file+0x1aa/0xca0 vfs_fsync_range+0xfb/0x260 do_fsync+0x48/0xa0 [...] ================================================================== The above issue may happen as follows:
-------------------------------------
do_fsync
vfs_fsync_range
ext4_sync_file
file_write_and_wait_range
__filemap_fdatawrite_range
do_writepages
ext4_writepages
mpage_map_and_submit_extent
mpage_map_one_extent
ext4_map_blocks
ext4_mb_new_blocks
ext4_mb_normalize_request
>>> start + size <= ac->ac_o_ex.fe_logical
ext4_mb_regular_allocator
ext4_mb_simple_scan_group
ext4_mb_use_best_found
ext4_mb_new_preallocation
ext4_mb_new_inode_pa
ext4_mb_use_inode_pa
>>> set ac->ac_b_ex.fe_len <= 0
ext4_mb_mark_diskspace_used
>>> BUG_ON(ac->ac_b_ex.fe_len <= 0);

We can easily reproduce this problem with the following commands:

    `fallocate -l100M disk`
    `mkfs.ext4 -b 1024 -g 256 disk`
    `mount disk /mnt`
    `fsstress -d /mnt -l 0 -n 1000 -p 1`

The size must be smaller than or equal to EXT4_BLOCKS_PER_GROUP. Therefore, "start + size <= ac->ac_o_ex.fe_logical" may occur when the size is truncated. So start should be the start position of the group where ac_o_ex.fe_logical is located after alignment. In addition, when the value of fe_logical or EXT4_BLOCKS_PER_GROUP is very large, the value calculated by start_off is more accurate. Cc: stable@kernel.org Fixes: cd648b8a8fd5 ("ext4: trim allocation requests to group size") Reported-by: Hulk Robot Signed-off-by: Baokun Li Reviewed-by: Ritesh Harjani Link: https://lore.kernel.org/r/20220528110017.354175-2-libaokun1@huawei.com Signed-off-by: Theodore Ts'o Signed-off-by: Greg Kroah-Hartman commit 93b5acac36cd1318ace2cd422267de7e2ad51fe9 Author: Ilpo Järvinen Date: Fri May 20 13:35:41 2022 +0300 serial: 8250: Store to lsr_save_flags after lsr read commit be03b0651ffd8bab69dfd574c6818b446c0753ce upstream. Not all LSR register flags are preserved across reads. Therefore, LSR readers must store the non-preserved bits into lsr_save_flags.
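As a hedged sketch of that pattern (the field and macro names exist in the 8250 driver, but this is an illustration, not the patch itself):

    lsr = serial_in(up, UART_LSR);
    /* break/parity/overrun bits are cleared by the read itself, so
     * accumulate them for whoever consumes them later */
    up->lsr_saved_flags |= lsr & LSR_SAVE_FLAGS;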
This fix was initially mixed into feature commit f6f586102add ("serial: 8250: Handle UART without interrupt on TEMT using em485"). However, that feature change had a flaw and was reverted to make room for a simpler approach providing the same feature. The embedded fix got reverted along with the feature change. Re-add the lsr_save_flags fix and properly mark it as a fix. Link: https://lore.kernel.org/all/1d6c31d-d194-9e6a-ddf9-5f29af829f3@linux.intel.com/T/#m1737eef986bd20cf19593e344cebd7b0244945fc Fixes: e490c9144cfa ("tty: Add software emulated RS485 support for 8250") Cc: stable Acked-by: Uwe Kleine-König Signed-off-by: Uwe Kleine-König Signed-off-by: Ilpo Järvinen Link: https://lore.kernel.org/r/f4d774be-1437-a550-8334-19d8722ab98c@linux.intel.com Signed-off-by: Greg Kroah-Hartman commit d85e4e6284a91aa2d1ab004e9d84b9c09b4aa203 Author: Miaoqian Lin Date: Fri Jun 3 18:02:44 2022 +0400 usb: gadget: lpc32xx_udc: Fix refcount leak in lpc32xx_udc_probe commit 4757c9ade34178b351580133771f510b5ffcf9c8 upstream. of_parse_phandle() returns a node pointer with its refcount incremented; we should call of_node_put() on it when it is no longer needed. Add the missing of_node_put() to avoid a refcount leak; of_node_put() will check for a NULL pointer. Fixes: 24a28e428351 ("USB: gadget driver for LPC32xx") Cc: stable Signed-off-by: Miaoqian Lin Link: https://lore.kernel.org/r/20220603140246.64529-1-linmq006@gmail.com Signed-off-by: Greg Kroah-Hartman commit 85203393d81b56a4d03a1488ae073c88f5b4dd15 Author: Robert Eckelmann Date: Sat May 21 23:08:08 2022 +0900 USB: serial: io_ti: add Agilent E5805A support commit 908e698f2149c3d6a67d9ae15c75545a3f392559 upstream. Add support for the Agilent E5805A (a rebranded ION Edgeport/4) to io_ti. Signed-off-by: Robert Eckelmann Link: https://lore.kernel.org/r/20220521230808.30931eca@octoberrain Cc: stable@vger.kernel.org Signed-off-by: Johan Hovold Signed-off-by: Greg Kroah-Hartman commit e69721e7f44299109bc1106145daa953bc861f1d Author: Slark Xiao Date: Wed Jun 1 11:47:40 2022 +0800 USB: serial: option: add support for Cinterion MV31 with new baseline commit 158f7585bfcea4aae0ad4128d032a80fec550df1 upstream. Add support for the Cinterion MV31 device with the new Qualcomm baseline. Different PIDs are used to separate it from previous baseline products. All interface settings are kept the same as before. Below is the test evidence:

T: Bus=03 Lev=01 Prnt=01 Port=00 Cnt=01 Dev#= 6 Spd=480 MxCh= 0
D: Ver= 2.10 Cls=ef(misc ) Sub=02 Prot=01 MxPS=64 #Cfgs= 1
P: Vendor=1e2d ProdID=00b8 Rev=04.14
S: Manufacturer=Cinterion
S: Product=Cinterion PID 0x00B8 USB Mobile Broadband
S: SerialNumber=90418e79
C: #Ifs= 6 Cfg#= 1 Atr=a0 MxPwr=500mA
I: If#=0x0 Alt= 0 #EPs= 1 Cls=02(commc) Sub=0e Prot=00 Driver=cdc_mbim
I: If#=0x1 Alt= 1 #EPs= 2 Cls=0a(data ) Sub=00 Prot=02 Driver=cdc_mbim
I: If#=0x2 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=ff Prot=40 Driver=option
I: If#=0x3 Alt= 0 #EPs= 1 Cls=ff(vend.) Sub=ff Prot=ff Driver=(none)
I: If#=0x4 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=ff Prot=60 Driver=option
I: If#=0x5 Alt= 0 #EPs= 2 Cls=ff(vend.) Sub=ff Prot=30 Driver=option
T: Bus=03 Lev=01 Prnt=01 Port=00 Cnt=01 Dev#= 7 Spd=480 MxCh= 0
D: Ver= 2.10 Cls=ef(misc ) Sub=02 Prot=01 MxPS=64 #Cfgs= 1
P: Vendor=1e2d ProdID=00b9 Rev=04.14
S: Manufacturer=Cinterion
S: Product=Cinterion PID 0x00B9 USB Mobile Broadband
S: SerialNumber=90418e79
C: #Ifs= 4 Cfg#= 1 Atr=a0 MxPwr=500mA
I: If#=0x0 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=ff Prot=50 Driver=qmi_wwan
I: If#=0x1 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=ff Prot=40 Driver=option
I: If#=0x2 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=ff Prot=60 Driver=option
I: If#=0x3 Alt= 0 #EPs= 2 Cls=ff(vend.) Sub=ff Prot=30 Driver=option

For PID 00b8, interface 3 is a GNSS port, which doesn't use the serial driver. Signed-off-by: Slark Xiao Link: https://lore.kernel.org/r/20220601034740.5438-1-slark_xiao@163.com [ johan: rename defines using a "2" infix ] Cc: stable@vger.kernel.org Signed-off-by: Johan Hovold Signed-off-by: Greg Kroah-Hartman commit e8cc56b8d7e2017fd96e7fd89812961e0bcb0dd3 Author: Ian Abbott Date: Tue Jun 7 18:18:19 2022 +0100 comedi: vmk80xx: fix expression for tx buffer size commit 242439f7e279d86b3f73b5de724bc67b2f8aeb07 upstream. The expression for setting the size of the allocated bulk TX buffer (`devpriv->usb_tx_buf`) is calling `usb_endpoint_maxp(devpriv->ep_rx)`, which is using the wrong endpoint (it should be `devpriv->ep_tx`). Fix it. Fixes: a23461c47482 ("comedi: vmk80xx: fix transfer-buffer overflow") Cc: Johan Hovold Cc: stable@vger.kernel.org # 4.9+ Reviewed-by: Johan Hovold Signed-off-by: Ian Abbott Link: https://lore.kernel.org/r/20220607171819.4121-1-abbotti@mev.co.uk Signed-off-by: Greg Kroah-Hartman commit 3fe0d94cec04b615bd76a18d8e5bb86fd5aa5f73 Author: zijun_hu Date: Sat Sep 16 01:59:41 2017 +0800 irqchip/gic-v3: Iterate over possible CPUs by for_each_possible_cpu() [ Upstream commit 3fad4cdac235c5b13227d0c09854c689ae62c70b ] get_cpu_number() doesn't use the existing helper to iterate over possible CPUs. This causes an error in the case of a discontinuous @cpu_possible_mask such as 0b11110001, which can result from a core having failed to come up on an SMP machine. Fix it by using the existing helper for_each_possible_cpu(). Signed-off-by: zijun_hu Signed-off-by: Marc Zyngier Signed-off-by: Sasha Levin commit 87da903ce632d5689bef66d56ee5dae700d82104 Author: Miaoqian Lin Date: Wed Jun 1 12:09:25 2022 +0400 irqchip/gic/realview: Fix refcount leak in realview_gic_of_init [ Upstream commit f4b98e314888cc51486421bcf6d52852452ea48b ] of_find_matching_node_and_match() returns a node pointer with its refcount incremented; we should call of_node_put() on it when it is no longer needed. Add the missing of_node_put() to avoid a refcount leak. Fixes: 82b0a434b436 ("irqchip/gic/realview: Support more RealView DCC variants") Signed-off-by: Miaoqian Lin Signed-off-by: Marc Zyngier Link: https://lore.kernel.org/r/20220601080930.31005-2-linmq006@gmail.com Signed-off-by: Sasha Levin commit e804514e868013b69c9d01a7ed370a69442a82e2 Author: Miaoqian Lin Date: Wed Jun 1 16:30:26 2022 +0400 misc: atmel-ssc: Fix IRQ check in ssc_probe [ Upstream commit 1c245358ce0b13669f6d1625f7a4e05c41f28980 ] platform_get_irq() returns a negative error number instead of 0 on failure, and the documentation for platform_get_irq() provides a usage example:

    int irq = platform_get_irq(pdev, 0);
    if (irq < 0)
            return irq;

Fix the check of the return value to catch errors correctly. Fixes: eb1f2930609b ("Driver for the Atmel on-chip SSC on AT32AP and AT91") Reviewed-by: Claudiu Beznea Signed-off-by: Miaoqian Lin Link: https://lore.kernel.org/r/20220601123026.7119-1-linmq006@gmail.com Signed-off-by: Greg Kroah-Hartman Signed-off-by: Sasha Levin commit a298e888e86c8c24d67ccecbc58728a1f261d996 Author: Trond Myklebust Date: Tue May 31 11:03:06 2022 -0400 pNFS: Don't keep retrying if the server replied NFS4ERR_LAYOUTUNAVAILABLE [ Upstream commit fe44fb23d6ccde4c914c44ef74ab8d9d9ba02bea ] If the server tells us that a pNFS layout is not available for a specific file, then we should not keep pounding it with further layoutget requests.
Fixes: 183d9e7b112a ("pnfs: rework LAYOUTGET retry handling") Signed-off-by: Trond Myklebust Signed-off-by: Anna Schumaker Signed-off-by: Sasha Levin commit 1e0dacdd936695269de5c3256ce5c57871b13a8d Author: Jason A. Donenfeld Date: Mon Jun 13 22:07:01 2022 -0400 random: credit cpu and bootloader seeds by default [ Upstream commit 846bb97e131d7938847963cca00657c995b1fce1 ] This commit changes the default Kconfig values of RANDOM_TRUST_CPU and RANDOM_TRUST_BOOTLOADER to be Y by default. It does not change any existing configs or change any kernel behavior. The reason for this is several fold. As background, I recently had an email thread with the kernel maintainers of Fedora/RHEL, Debian, Ubuntu, Gentoo, Arch, NixOS, Alpine, SUSE, and Void as recipients. I noted that some distros trust RDRAND, some trust EFI, and some trust both, and I asked why or why not. There wasn't really much of a "debate" but rather an interesting discussion of what the historical reasons have been for this, and it came up that some distros just missed the introduction of the bootloader Kconfig knob, while another didn't want to enable it until there was a boot time switch to turn it off for more concerned users (which has since been added). The result of the rather uneventful discussion is that every major Linux distro enables these two options by default. While I didn't have really too strong of an opinion going into this thread -- and I mostly wanted to learn what the distros' thinking was one way or another -- ultimately I think their choice was a decent enough one for a default option (which can be disabled at boot time). I'll try to summarize the pros and cons: Pros: - The RNG machinery gets initialized super quickly, and there's no messing around with subsequent blocking behavior. - The bootloader mechanism is used by kexec in order for the prior kernel to initialize the RNG of the next kernel, which increases the entropy available to early boot daemons of the next kernel. - Previous objections related to backdoors centered around Dual_EC_DRBG-like kleptographic systems, in which observing some amount of the output stream enables an adversary holding the right key to determine the entire output stream. This used to be a partially justified concern, because RDRAND output was mixed into the output stream in varying ways, some of which may have lacked pre-image resistance (e.g. XOR or an LFSR). But this is no longer the case. Now, all usage of RDRAND and bootloader seeds go through a cryptographic hash function. This means that the CPU would have to compute a hash pre-image, which is not considered to be feasible (otherwise the hash function would be terribly broken). - More generally, if the CPU is backdoored, the RNG is probably not the realistic vector of choice for an attacker. - These CPU or bootloader seeds are far from being the only source of entropy. Rather, there is generally a pretty huge amount of entropy, not all of which is credited, especially on CPUs that support instructions like RDRAND. In other words, assuming RDRAND outputs all zeros, an attacker would *still* have to accurately model every single other entropy source also in use. - The RNG now reseeds itself quite rapidly during boot, starting at 2 seconds, then 4, then 8, then 16, and so forth, so that other sources of entropy get used without much delay. 
- Paranoid users can set random.trust_{cpu,bootloader}=no in the kernel command line, and paranoid system builders can set the Kconfig options to N, so there's no reduction or restriction of optionality. - It's a practical default. - All the distros have it set this way. Microsoft and Apple trust it too. Bandwagon. Cons: - RDRAND *could* still be backdoored with something like a fixed key or limited space serial number seed or another indexable scheme like that. (However, it's hard to imagine threat models where the CPU is backdoored like this, yet people are still okay making *any* computations with it or connecting it to networks, etc.) - RDRAND *could* be defective, rather than backdoored, and produce garbage that is in one way or another insufficient for crypto. - Suggesting a *reduction* in paranoia, as this commit effectively does, may cause some to question my personal integrity as a "security person". - Bootloader seeds and RDRAND are generally very difficult if not altogether impossible to audit. Keep in mind that this doesn't actually change any behavior. This is just a change in the default Kconfig value. The distros already are shipping kernels that set things this way. Ard made an additional argument in [1]: We're at the mercy of firmware and micro-architecture anyway, given that we are also relying on it to ensure that every instruction in the kernel's executable image has been faithfully copied to memory, and that the CPU implements those instructions as documented. So I don't think firmware or ISA bugs related to RNGs deserve special treatment - if they are broken, we should quirk around them like we usually do. So enabling these by default is a step in the right direction IMHO. In [2], Phil pointed out that having this disabled masked a bug that CI otherwise would have caught: A clean 5.15.45 boots cleanly, whereas a downstream kernel shows the static key warning (but it does go on to boot). The significant difference is that our defconfigs set CONFIG_RANDOM_TRUST_BOOTLOADER=y. Defining that on top of multi_v7_defconfig demonstrates the issue on a clean 5.15.45. Conversely, not setting that option in a downstream kernel build avoids the warning. [1] https://lore.kernel.org/lkml/CAMj1kXGi+ieviFjXv9zQBSaGyyzeGW_VpMpTLJK8PJb2QHEQ-w@mail.gmail.com/ [2] https://lore.kernel.org/lkml/c47c42e3-1d56-5859-a6ad-976a1a3381c6@raspberrypi.com/ Cc: Theodore Ts'o Reviewed-by: Ard Biesheuvel Signed-off-by: Jason A. Donenfeld Signed-off-by: Sasha Levin commit 70b6e2beef367821f0a0e6d5270c8ca96b735ac2 Author: Chen Lin Date: Wed Jun 8 20:46:53 2022 +0800 net: ethernet: mtk_eth_soc: fix misuse of mem alloc interface netdev[napi]_alloc_frag [ Upstream commit 2f2c0d2919a14002760f89f4e02960c735a316d2 ] When rx_flag == MTK_RX_FLAGS_HWLRO, rx_data_len = MTK_MAX_LRO_RX_LENGTH (4096 * 3) > PAGE_SIZE. netdev_alloc_frag() is for allocation of page fragments only; see other drivers and Documentation/vm/page_frags.rst for reference. Branch to use __get_free_pages() when ring->frag_size > PAGE_SIZE. Signed-off-by: Chen Lin Link: https://lore.kernel.org/r/1654692413-2598-1-git-send-email-chen45464546@163.com Signed-off-by: Jakub Kicinski Signed-off-by: Sasha Levin commit 2cf73c7cb6125083408d77f43d0e84d86aed0000 Author: Wang Yufen Date: Tue Jun 7 20:00:28 2022 +0800 ipv6: Fix signed integer overflow in l2tp_ip6_sendmsg [ Upstream commit f638a84afef3dfe10554c51820c16e39a278c915 ] When len >= INT_MAX - transhdrlen, ulen = len + transhdrlen will overflow. To fix this, we can follow what udpv6 does and subtract the transhdrlen from the max.
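A minimal userspace model of the overflow and that guard (the names and values are illustrative, not from the patch):

    #include <limits.h>
    #include <stddef.h>
    #include <stdio.h>

    /* ulen = len + transhdrlen overflows a signed int once
     * len > INT_MAX - transhdrlen, so reject such sends up front. */
    static int check_send_len(size_t len, int transhdrlen)
    {
        if (len > (size_t)(INT_MAX - transhdrlen))
            return -1;                      /* -EMSGSIZE in the kernel */
        return (int)len + transhdrlen;      /* now provably <= INT_MAX */
    }

    int main(void)
    {
        printf("%d\n", check_send_len((size_t)INT_MAX, 8)); /* rejected: -1 */
        printf("%d\n", check_send_len(1400, 8));            /* 1408 */
        return 0;
    }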
Signed-off-by: Wang Yufen Link: https://lore.kernel.org/r/20220607120028.845916-2-wangyufen@huawei.com Signed-off-by: Jakub Kicinski Signed-off-by: Sasha Levin commit 1eb0afecfb9cd0f38424b82bd9aaa542310934ee Author: Xiaohui Zhang Date: Tue Jun 7 16:32:30 2022 +0800 nfc: nfcmrvl: Fix memory leak in nfcmrvl_play_deferred [ Upstream commit 8a4d480702b71184fabcf379b80bf7539716752e ] Similar to the handling of play_deferred in commit 19cfe912c37b ("Bluetooth: btusb: Fix memory leak in play_deferred"), we thought a patch might be needed here as well. Currently usb_submit_urb() is called directly to submit deferred tx urbs after unanchoring them. So usb_giveback_urb_bh() would fail to unref them in usb_unanchor_urb() and cause a memory leak. Put those urbs in tx_anchor to avoid the leak, and also fix the error handling. Signed-off-by: Xiaohui Zhang Acked-by: Krzysztof Kozlowski Link: https://lore.kernel.org/r/20220607083230.6182-1-xiaohuizhang@ruc.edu.cn Signed-off-by: Jakub Kicinski Signed-off-by: Sasha Levin commit 78b34fd0d7546bf9daae995855609363be770343 Author: chengkaitao Date: Thu Jun 2 08:55:42 2022 +0800 virtio-mmio: fix missing put_device() when vm_cmdline_parent registration failed [ Upstream commit a58a7f97ba11391d2d0d408e0b24f38d86ae748e ] The reference must be released when device_register(&vm_cmdline_parent) fails. Add the corresponding put_device() in the error handling path. Signed-off-by: chengkaitao Message-Id: <20220602005542.16489-1-chengkaitao@didiglobal.com> Signed-off-by: Michael S. Tsirkin Acked-by: Jason Wang Signed-off-by: Sasha Levin commit 228ecc2ae817bb5849a45d1af61e504de08ba6dc Author: James Smart Date: Fri Jun 3 10:43:26 2022 -0700 scsi: lpfc: Fix port stuck in bypassed state after LIP in PT2PT topology [ Upstream commit 336d63615466b4c06b9401c987813fd19bdde39b ] After issuing a LIP, a specific target vendor does not ACC the FLOGI that lpfc sends. However, it does send its own FLOGI that lpfc ACCs. The target then establishes the port IDs by sending a PLOGI. lpfc PLOGI_ACCs and starts the RPI registration for DID 0x000001. The target then sends a LOGO to the fabric DID. lpfc is currently treating the LOGO from the fabric DID as a link down and cleans up all the ndlps. The ndlp for DID 0x000001 is put back into NPR and discovery stops, leaving the port stuck in bypassed mode. Change lpfc behavior such that if a LOGO is received for the fabric DID in PT2PT topology, skip the lpfc_linkdown_port() routine and just move the fabric DID back to NPR. Link: https://lore.kernel.org/r/20220603174329.63777-7-jsmart2021@gmail.com Co-developed-by: Justin Tee Signed-off-by: Justin Tee Signed-off-by: James Smart Signed-off-by: Martin K. Petersen Signed-off-by: Sasha Levin commit 81ed95046f6b6a6b9ca827dcfeefb7e672e64322 Author: Wentao Wang Date: Thu Jun 2 08:57:00 2022 +0000 scsi: vmw_pvscsi: Expand vcpuHint to 16 bits [ Upstream commit cf71d59c2eceadfcde0fb52e237990a0909880d7 ] vcpuHint has been expanded to 16 bits on the host to enable routing to more CPUs. The guest side should align with the change. This change has been tested with hosts with 8-bit and 16-bit vcpuHint; on both platforms the host side can get the correct value. Link: https://lore.kernel.org/r/EF35F4D5-5DCC-42C5-BCC4-29DF1729B24C@vmware.com Signed-off-by: Wentao Wang Signed-off-by: Martin K. Petersen
Signed-off-by: Sasha Levin commit d9968d5ffdfed2b15812e93db4f0ecd232224d5b Author: Adam Ford Date: Thu May 26 13:21:28 2022 -0500 ASoC: wm8962: Fix suspend while playing music [ Upstream commit d1f5272c0f7d2e53c6f2480f46725442776f5f78 ] If the audio CODEC is playing sound when the system is suspended, it can be left in a state which throws the following error: wm8962 3-001a: ASoC: error at soc_component_read_no_lock on wm8962.3-001a: -16 Once this error has occurred, the audio will not work again until the system is rebooted. Fix this by configuring SET_SYSTEM_SLEEP_PM_OPS. Signed-off-by: Adam Ford Acked-by: Charles Keepax Link: https://lore.kernel.org/r/20220526182129.538472-1-aford173@gmail.com Signed-off-by: Mark Brown Signed-off-by: Sasha Levin commit ca4693e6e06e4fd2b240c0fec47aa2498c94848e Author: Sergey Shtylyov Date: Sat May 21 23:34:10 2022 +0300 ata: libata-core: fix NULL pointer deref in ata_host_alloc_pinfo() [ Upstream commit bf476fe22aa1851bab4728e0c49025a6a0bea307 ] In the unlikely (and probably wrong?) case that the 'ppi' parameter of ata_host_alloc_pinfo() points to an array starting with a NULL pointer, there's going to be a kernel oops, as the 'pi' local variable won't get reassigned from its initial value of NULL. Initialize 'pi' to '&ata_dummy_port_info' instead to fix the possible kernel oops for good... Found by Linux Verification Center (linuxtesting.org) with the SVACE static analysis tool. Signed-off-by: Sergey Shtylyov Signed-off-by: Damien Le Moal Signed-off-by: Sasha Levin commit ef78edd4ef7ac0fe0c04f999bbba76dbac6567ff Author: Charles Keepax Date: Thu Jun 2 17:21:18 2022 +0100 ASoC: cs42l56: Correct typo in minimum level for SX volume controls [ Upstream commit a8928ada9b96944cadd8b65d191e33199fd38782 ] A couple of the SX volume controls specify 0x84 as the lowest volume value; however, the correct value from the datasheet is 0x44. The datasheet doesn't include spaces in the value it displays as binary, so this was almost certainly just a typo in reading 1000100. Signed-off-by: Charles Keepax Link: https://lore.kernel.org/r/20220602162119.3393857-6-ckeepax@opensource.cirrus.com Signed-off-by: Mark Brown Signed-off-by: Sasha Levin commit c30f6af44df79f02c6309156bf03337cf4c456e5 Author: Charles Keepax Date: Thu Jun 2 17:21:17 2022 +0100 ASoC: cs42l52: Correct TLV for Bypass Volume [ Upstream commit 91e90c712fade0b69cdff7cc6512f6099bd18ae5 ] The Bypass Volume is accidentally using a -6dB minimum TLV rather than the correct -60dB minimum. Add a new TLV to correct this. Signed-off-by: Charles Keepax Link: https://lore.kernel.org/r/20220602162119.3393857-5-ckeepax@opensource.cirrus.com Signed-off-by: Mark Brown Signed-off-by: Sasha Levin commit 01bc8d947fb1a107d1f33dce33bcc3d5bd173a9e Author: Charles Keepax Date: Thu Jun 2 17:21:16 2022 +0100 ASoC: cs53l30: Correct number of volume levels on SX controls [ Upstream commit 7fbd6dd68127927e844912a16741016d432a0737 ] This driver specified the maximum value rather than the number of volume levels on the SX controls; this is incorrect, so correct them.
Reported-by: David Rhodes Signed-off-by: Charles Keepax Link: https://lore.kernel.org/r/20220602162119.3393857-4-ckeepax@opensource.cirrus.com Signed-off-by: Mark Brown Signed-off-by: Sasha Levin commit a4cb3d1d2b8fdba52b168cbbccd6a1949bfbcec9 Author: Charles Keepax Date: Thu Jun 2 17:21:14 2022 +0100 ASoC: cs42l52: Fix TLV scales for mixer controls [ Upstream commit 8bf5aabf524eec61013e506f764a0b2652dc5665 ] The datasheet specifies the range of the mixer volumes as between -51.5dB and 12dB with a 0.5dB step. Update the TLVs for this. Signed-off-by: Charles Keepax Link: https://lore.kernel.org/r/20220602162119.3393857-2-ckeepax@opensource.cirrus.com Signed-off-by: Mark Brown Signed-off-by: Sasha Levin commit 0b8e19b40c27ea40c5628c46c87eba4a61a3c68f Author: Jason A. Donenfeld Date: Tue Jun 7 17:04:38 2022 +0200 random: account for arch randomness in bits commit 77fc95f8c0dc9e1f8e620ec14d2fb65028fb7adc upstream. Rather than accounting in bytes and multiplying (shifting), we can just account in bits and avoid the shift. The main motivation for this is that there are other patches in flux that expand this code a bit, and avoiding the duplication of "* 8" everywhere makes things a bit clearer. Cc: stable@vger.kernel.org Fixes: 12e45a2a6308 ("random: credit architectural init the exact amount") Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 855aac499edbdba5feceadbcd511e4479795d491 Author: Jason A. Donenfeld Date: Tue Jun 7 17:00:16 2022 +0200 random: mark bootloader randomness code as __init commit 39e0f991a62ed5efabd20711a7b6e7da92603170 upstream. add_bootloader_randomness() and the variables it touches are only used during __init and not after, so mark these as __init. At the same time, unexport it, since it's only called by other __init code that's built-in. Cc: stable@vger.kernel.org Fixes: 428826f5358c ("fdt: add support for rng-seed") Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit dc1485143685c387cca6425bef8454570018daeb Author: Jason A. Donenfeld Date: Tue Jun 7 09:44:07 2022 +0200 random: avoid checking crng_ready() twice in random_init() commit 9b29b6b20376ab64e1b043df6301d8a92378e631 upstream. The current flow expands to:

    if (crng_ready())
        ...
    else if (...)
        if (!crng_ready())
            ...

The second crng_ready() call is redundant, but can't so easily be optimized out by the compiler. This commit simplifies that to:

    if (crng_ready())
        ...
    else if (...)
        ...

Fixes: 560181c27b58 ("random: move initialization functions out of hot pages") Cc: stable@vger.kernel.org Cc: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit ab62f0f9d32a8839077eec96cc83b5a20c29774c Author: Nicolai Stange Date: Thu Jun 2 22:22:32 2022 +0200 crypto: drbg - make reseeding from get_random_bytes() synchronous commit 074bcd4000e0d812bc253f86fedc40f81ed59ccc upstream. get_random_bytes() usually doesn't have full entropy available by the time DRBG instances are first getting seeded from it during boot. Thus, the DRBG implementation registers random_ready_callbacks which would in turn schedule some work for reseeding the DRBGs once get_random_bytes() has sufficient entropy available.
For reference, the relevant history around handling DRBG (re)seeding in the context of a not yet fully seeded get_random_bytes() is:

commit 16b369a91d0d ("random: Blocking API for accessing nonblocking_pool")
commit 4c7879907edd ("crypto: drbg - add async seeding operation")
commit 205a525c3342 ("random: Add callback API for random pool readiness")
commit 57225e679788 ("crypto: drbg - Use callback API for random readiness")
commit c2719503f5e1 ("random: Remove kernel blocking API")

However, some time later, the initialization state of get_random_bytes() was made queryable via rng_is_initialized(), introduced with commit 9a47249d444d ("random: Make crng state queryable"). This primitive now allows for streamlining the DRBG reseeding from get_random_bytes() by replacing that aforementioned asynchronous work scheduling from random_ready_callbacks with some simpler, synchronous code in drbg_generate() next to the related logic already present therein. Apart from improving overall code readability, this change will also enable DRBG users to rely on wait_for_random_bytes() for ensuring that the initial seeding has completed, if desired. The previous patches already laid the groundwork by making drbg_seed() record in each DRBG instance whether it was being seeded at a time when rng_is_initialized() was still false, as indicated by ->seeded == DRBG_SEED_STATE_PARTIAL. All that remains to be done now is to make drbg_generate() check for this condition, determine whether rng_is_initialized() has flipped to true in the meanwhile and invoke a reseed from get_random_bytes() if so. Make this move:

- rename the former drbg_async_seed() work handler, i.e. the one in charge of reseeding a DRBG instance from get_random_bytes(), to "drbg_seed_from_random()",
- change its signature as appropriate, i.e. make it take a struct drbg_state rather than a work_struct and change its return type from "void" to "int" in order to allow for passing error information from e.g. its __drbg_seed() invocation onwards to callers,
- make drbg_generate() invoke this drbg_seed_from_random() once it encounters a DRBG instance with ->seeded == DRBG_SEED_STATE_PARTIAL by the time rng_is_initialized() has flipped to true and
- prune everything related to the former, random_ready_callback based mechanism.

As drbg_seed_from_random() is now getting invoked from drbg_generate() with the ->drbg_mutex being held, it must not attempt to recursively grab it once again. Remove the corresponding mutex operations from what is now drbg_seed_from_random(). Furthermore, as drbg_seed_from_random() can now report errors directly to its caller, there's no need for it to temporarily switch the DRBG's ->seeded state to DRBG_SEED_STATE_UNSEEDED so that a failure of the subsequently invoked __drbg_seed() will get signaled to drbg_generate(). Don't do it then. Signed-off-by: Nicolai Stange Signed-off-by: Herbert Xu [Jason: for stable, undid the modifications for the backport of 5acd3548.] Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman Signed-off-by: Greg Kroah-Hartman commit ba6a98f8c776826158e62a7f5798a73bd4dbdc77 Author: Stephan Müller Date: Sun Jun 7 15:20:26 2020 +0200 crypto: drbg - always try to free Jitter RNG instance commit 819966c06b759022e9932f328284314d9272b9f3 upstream. The Jitter RNG is unconditionally allocated as a seed source following the patch 97f2650e5040. Thus, the instance must always be deallocated.
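A hedged sketch of the corresponding teardown rule (close to, but not necessarily verbatim from, the patch):

    /* The Jitter RNG handle is now allocated unconditionally, so free it
     * unconditionally too; a failed allocation leaves an error pointer
     * rather than NULL, hence the IS_ERR_OR_NULL() test. */
    if (!IS_ERR_OR_NULL(drbg->jent))
            crypto_free_rng(drbg->jent);
    drbg->jent = NULL;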
Reported-by: syzbot+2e635807decef724a1fa@syzkaller.appspotmail.com Fixes: 97f2650e5040 ("crypto: drbg - always seeded with SP800-90B ...") Signed-off-by: Stephan Mueller Signed-off-by: Herbert Xu Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit f9d953c3efcb766f8bd04b9df8de1f82f0c201b1 Author: Nicolai Stange Date: Thu Jun 2 22:22:31 2022 +0200 crypto: drbg - move dynamic ->reseed_threshold adjustments to __drbg_seed() commit 262d83a4290c331cd4f617a457408bdb82fbb738 upstream. Since commit 42ea507fae1a ("crypto: drbg - reseed often if seedsource is degraded"), the maximum seed lifetime represented by ->reseed_threshold gets temporarily lowered if the get_random_bytes() source cannot provide sufficient entropy yet, as is common during boot, and restored back to the original value again once that has changed. More specifically, if the add_random_ready_callback() invoked from drbg_prepare_hrng() in the course of DRBG instantiation does not return -EALREADY, that is, if get_random_bytes() has not been fully initialized at this point yet, drbg_prepare_hrng() will lower ->reseed_threshold to a value of 50. The drbg_async_seed() scheduled from said random_ready_callback will eventually restore the original value. A future patch will replace the random_ready_callback based notification mechanism and thus, there will be no add_random_ready_callback() return value anymore which could get compared to -EALREADY. However, there's __drbg_seed() which gets invoked in the course of both, the DRBG instantiation as well as the eventual reseeding from get_random_bytes() in aforementioned drbg_async_seed(), if any. Moreover, it knows about the get_random_bytes() initialization state by the time the seed data had been obtained from it: the new_seed_state argument introduced with the previous patch would get set to DRBG_SEED_STATE_PARTIAL in case get_random_bytes() had not been fully initialized yet and to DRBG_SEED_STATE_FULL otherwise. Thus, __drbg_seed() provides a convenient alternative for managing that ->reseed_threshold lowering and restoring at a central place. Move all ->reseed_threshold adjustment code from drbg_prepare_hrng() and drbg_async_seed() respectively to __drbg_seed(). Make __drbg_seed() lower the ->reseed_threshold to 50 in case its new_seed_state argument equals DRBG_SEED_STATE_PARTIAL and let it restore the original value otherwise. There is no change in behaviour. Signed-off-by: Nicolai Stange Reviewed-by: Stephan Müller Signed-off-by: Herbert Xu Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman Signed-off-by: Greg Kroah-Hartman commit d0ff784dcaa3ef6b39e13ff7c52d18c469c7253c Author: Nicolai Stange Date: Thu Jun 2 22:22:30 2022 +0200 crypto: drbg - track whether DRBG was seeded with !rng_is_initialized() commit 2bcd25443868aa8863779a6ebc6c9319633025d2 upstream. Currently, the DRBG implementation schedules asynchronous works from random_ready_callbacks for reseeding the DRBG instances with output from get_random_bytes() once the latter has sufficient entropy available. However, as the get_random_bytes() initialization state can get queried by means of rng_is_initialized() now, there is no real need for this asynchronous reseeding logic anymore and it's better to keep things simple by doing it synchronously when needed instead, i.e. from drbg_generate() once rng_is_initialized() has flipped to true. 
Of course, for this to work, drbg_generate() would need some means by which it can tell whether or not rng_is_initialized() has flipped to true since the last seeding from get_random_bytes(). Or equivalently, whether or not the last seed from get_random_bytes() has happened when rng_is_initialized() was still evaluating to false. As it currently stands, enum drbg_seed_state allows for the representation of two different DRBG seeding states: DRBG_SEED_STATE_UNSEEDED and DRBG_SEED_STATE_FULL. The former makes drbg_generate() invoke a full reseeding operation involving both the rather expensive jitterentropy source and the get_random_bytes() randomness source. The DRBG_SEED_STATE_FULL state on the other hand implies that no reseeding at all is required for a !->pr DRBG variant. Introduce the new DRBG_SEED_STATE_PARTIAL state to enum drbg_seed_state for representing the condition that a DRBG was being seeded when rng_is_initialized() had still been false. In particular, this new state implies that
- the given DRBG instance has been fully seeded from the jitterentropy source (if enabled)
- and drbg_generate() is supposed to reseed from get_random_bytes() *only* once rng_is_initialized() turns to true.
Up to now, the __drbg_seed() helper used to set the given DRBG instance's ->seeded state to constant DRBG_SEED_STATE_FULL. Introduce a new argument allowing for the specification of the ->seeded value to be written instead. Make the first of its two callers, drbg_seed(), determine the appropriate value based on rng_is_initialized(). The remaining caller, drbg_async_seed(), is known to get invoked only once rng_is_initialized() is true, hence let it pass constant DRBG_SEED_STATE_FULL for the new argument to __drbg_seed(). There is no change in behaviour, except that the pr_devel() in drbg_generate() would now report "unseeded" for ->pr DRBG instances which had last been seeded when rng_is_initialized() was still evaluating to false. Signed-off-by: Nicolai Stange Reviewed-by: Stephan Müller Signed-off-by: Herbert Xu Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit d858a3b1aca6e7e7e4f5ba4d144400105f99db50 Author: Nicolai Stange Date: Thu Jun 2 22:22:29 2022 +0200 crypto: drbg - prepare for more fine-grained tracking of seeding state commit ce8ce31b2c5c8b18667784b8c515650c65d57b4e upstream. There are two different randomness sources the DRBGs are getting seeded from, namely the jitterentropy source (if enabled) and get_random_bytes(). At initial DRBG seeding time during boot, the latter might not have collected sufficient entropy for seeding itself yet and thus, the DRBG implementation schedules a reseed work from a random_ready_callback once that has happened. This is particularly important for the !->pr DRBG instances, for which (almost) no further reseeds are getting triggered during their lifetime. Because collecting data from the jitterentropy source is a rather expensive operation, the aforementioned asynchronously scheduled reseed work restricts itself to get_random_bytes() only. That is, it in some sense amends the initial DRBG seed derived from jitterentropy output at full (estimated) entropy with fresh randomness obtained from get_random_bytes() once that has been seeded with sufficient entropy itself.
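The caller-side selection of the new __drbg_seed() argument reduces to a one-liner; a minimal sketch, with the helper name invented purely for illustration:

    enum drbg_seed_state { DRBG_SEED_STATE_UNSEEDED,
                           DRBG_SEED_STATE_PARTIAL, /* jitterentropy only */
                           DRBG_SEED_STATE_FULL };

    /* drbg_seed() records whether get_random_bytes() was ready at the
     * time the seed material was taken */
    static enum drbg_seed_state seed_state_sketch(int rng_initialized)
    {
            return rng_initialized ? DRBG_SEED_STATE_FULL
                                   : DRBG_SEED_STATE_PARTIAL;
    }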
With the advent of rng_is_initialized(), there is no real need for doing the reseed operation from an asynchronously scheduled work anymore and a subsequent patch will make it synchronous by moving it next to related logic already present in drbg_generate(). However, for tracking whether a full reseed including the jitterentropy source is required or a "partial" reseed involving only get_random_bytes() would be sufficient already, the boolean struct drbg_state's ->seeded member must become a tristate value. Prepare for this by introducing the new enum drbg_seed_state and changing struct drbg_state's ->seeded member's type from bool to that type. For facilitating review, enum drbg_seed_state is made to only contain two members corresponding to the former ->seeded values of false and true resp. at this point: DRBG_SEED_STATE_UNSEEDED and DRBG_SEED_STATE_FULL. A third one for tracking the intermediate state of "seeded from jitterentropy only" will be introduced with a subsequent patch. There is no change in behaviour at this point. Signed-off-by: Nicolai Stange Reviewed-by: Stephan Müller Signed-off-by: Herbert Xu Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 157f12847d8920d413e386e1e4fcf16097916b37 Author: Stephan Müller Date: Fri Apr 17 21:34:03 2020 +0200 crypto: drbg - always seeded with SP800-90B compliant noise source commit 97f2650e504033376e8813691cb6eccf73151676 upstream. As the Jitter RNG provides an SP800-90B compliant noise source, always use this noise source for the (re)seeding of the DRBG. To make sure the DRBG is always properly seeded, the reseed threshold is reduced to 1<<20 generate operations. The Jitter RNG may report health test failures. Such health test failures are treated as transient as follows. The DRBG will not reseed from the Jitter RNG (but from get_random_bytes) in case of a health test failure; it still produces the requested random numbers, though. The Jitter RNG has a failure counter where at most 1024 consecutive resets due to a health test failure are considered as a transient error. If more consecutive resets are required, the Jitter RNG will return a permanent error which is returned to the caller by the DRBG. With this approach, the worst case reseed threshold is significantly lower than mandated by SP800-90A in order to seed with an SP800-90B noise source: the DRBG has a reseed threshold of 2^20 * 1024 = 2^30 generate requests. Yet, in case of a transient Jitter RNG health test failure, the DRBG is seeded with the data obtained from get_random_bytes. However, if the Jitter RNG fails during the initial seeding operation even due to a health test error, the DRBG will send an error to the caller because at that time, the DRBG has received no seed that is SP800-90B compliant. Signed-off-by: Stephan Mueller Signed-off-by: Herbert Xu Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 674ffec1feb14c9e1a00c56703c0fdbe9e13dbcd Author: Stephan Mueller Date: Wed May 8 16:19:24 2019 +0200 crypto: drbg - add FIPS 140-2 CTRNG for noise source commit db07cd26ac6a418dc2823187958edcfdb415fa83 upstream. FIPS 140-2 section 4.9.2 requires a continuous self test of the noise source. Up to kernel 4.8, drivers/char/random.c provided this continuous self test. Afterwards it was moved to a location that is inconsistent with the FIPS 140-2 requirements. The relevant patch was e192be9d9a30555aae2ca1dc3aad37cba484cd4a. Thus, the FIPS 140-2 CTRNG is added to the DRBG when it obtains the seed.
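The core of such a continuous test is a comparison of each fresh noise block against its predecessor; a minimal sketch (function name and error convention are illustrative only):

    #include <string.h>

    /* FIPS 140-2 CTRNG sketch: a noise block that repeats the previous
     * one verbatim counts as a health test failure */
    static int ctrng_check_sketch(const unsigned char *prev,
                                  const unsigned char *cur, size_t len)
    {
            return memcmp(prev, cur, len) != 0 ? 0 : -1;
    }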
This patch resurrects the function drbg_fips_continous_test that existed some time ago and applies it to the noise sources. The patch that removed the drbg_fips_continous_test was b3614763059b82c26bdd02ffcb1c016c1132aad0. The Jitter RNG implements its own FIPS 140-2 self test and thus does not need to be subjected to the test in the DRBG. The patch contains a tiny fix to ensure proper zeroization in case of an error during the Jitter RNG data gathering. Signed-off-by: Stephan Mueller Reviewed-by: Yann Droneaud Signed-off-by: Herbert Xu Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 37f51eebe80ea1b37cadf2df55ac7bc338fb8c56 Author: Jason A. Donenfeld Date: Tue Jun 7 10:40:05 2022 +0200 Revert "random: use static branch for crng_ready()" This reverts upstream commit f5bda35fba615ace70a656d4700423fa6c9bebee from stable. It's not essential and will take some time during 5.19 to work out properly. Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit db0011b314d628d1a051ca2c79da0b41cd155bd0 Author: Jason A. Donenfeld Date: Sun May 22 22:25:41 2022 +0200 random: check for signals after page of pool writes commit 1ce6c8d68f8ac587f54d0a271ac594d3d51f3efb upstream. get_random_bytes_user() checks for signals after producing a PAGE_SIZE worth of output, just like /dev/zero does. write_pool() is doing basically the same work (actually, slightly more expensive), and so should stop to check for signals in the same way. Let's also name it write_pool_user() to match get_random_bytes_user(), so this won't be misused in the future. Before this patch, massive writes to /dev/urandom would tie up the process for an extremely long time and make it unterminatable. After, it can be successfully interrupted. The following test program can be used to see this works as intended:

    #include <unistd.h>
    #include <signal.h>
    #include <stdio.h>
    #include <fcntl.h>

    static unsigned char x[~0U];

    static void handle(int) { }

    int main(int argc, char *argv[])
    {
            pid_t pid = getpid(), child;
            int fd;

            signal(SIGUSR1, handle);
            if (!(child = fork())) {
                    for (;;)
                            kill(pid, SIGUSR1);
            }
            fd = open("/dev/urandom", O_WRONLY);
            pause();
            printf("interrupted after writing %zd bytes\n", write(fd, x, sizeof(x)));
            close(fd);
            kill(child, SIGTERM);
            return 0;
    }

Result before: "interrupted after writing 2147479552 bytes" Result after: "interrupted after writing 4096 bytes" Cc: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit af41143f98f4ef5032f7b661566f71a52c6bc75d Author: Jens Axboe Date: Thu May 19 17:31:37 2022 -0600 random: wire up fops->splice_{read,write}_iter() commit 79025e727a846be6fd215ae9cdb654368ac3f9a6 upstream. Now that random/urandom is using {read,write}_iter, we can wire it up to using the generic splice handlers. Fixes: 36e2c7421f02 ("fs: don't allow splice read/write without explicit ops") Signed-off-by: Jens Axboe [Jason: added the splice_write path. Note that sendfile() and such still does not work for read, though it does for write, because of a file type restriction in splice_direct_to_actor(), which I'll address separately.] Cc: Al Viro Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit fa81f430d0595427d4c834f590a1bd5cad7506d4 Author: Jens Axboe Date: Thu May 19 17:43:15 2022 -0600 random: convert to using fops->write_iter() commit 22b0a222af4df8ee9bb8e07013ab44da9511b047 upstream.
Now that the read side has been converted to fix a regression with splice, convert the write side as well to have some symmetry in the interface used (and help deprecate ->write()). Signed-off-by: Jens Axboe [Jason: cleaned up random_ioctl a bit, require full writes in RNDADDENTROPY since it's crediting entropy, simplify control flow of write_pool(), and incorporate suggestions from Al.] Cc: Al Viro Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 46eac53a047ef12887e73ca7880a49f74674b615 Author: Jason A. Donenfeld Date: Sat May 14 13:59:30 2022 +0200 random: move randomize_page() into mm where it belongs commit 5ad7dd882e45d7fe432c32e896e2aaa0b21746ea upstream. randomize_page is an mm function. It is documented like one. It contains the history of one. It has the naming convention of one. It looks just like another very similar function in mm, randomize_stack_top(). And it has always been maintained and updated by mm people. There is no need for it to be in random.c. In the "which shape does not look like the other ones" test, pointing to randomize_page() is correct. So move randomize_page() into mm/util.c, right next to the similar randomize_stack_top() function. This commit contains no actual code changes. Cc: Andrew Morton Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 434eb68a285fe5e9376deaf034a1669fc06b8019 Author: Jason A. Donenfeld Date: Fri May 13 16:17:12 2022 +0200 random: move initialization functions out of hot pages commit 560181c27b582557d633ecb608110075433383af upstream. Much of random.c is devoted to initializing the rng and accounting for when a sufficient amount of entropy has been added. In a perfect world, this would all happen during init, and so we could mark these functions as __init. But in reality, this isn't the case: sometimes the rng only finishes initializing some seconds after system init is finished. For this reason, at the moment, a whole host of functions that are only used relatively close to system init and then never again are intermixed with functions that are used in hot code all the time. This creates more cache misses than necessary. In order to pack the hot code closer together, this commit moves the initialization functions that can't be marked as __init into .text.unlikely by way of the __cold attribute. Of particular note is moving credit_init_bits() into a macro wrapper that inlines the crng_ready() static branch check. This avoids a function call to a nop+ret, and most notably prevents extra entropy arithmetic from being computed in mix_interrupt_randomness(). Reviewed-by: Dominik Brodowski [ Jason: for stable, made sure the printk_deferred was a pr_notice, because those caused problems on ≤ 4.19 according to commit logs. ] Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit e279897ef0753653c110032e9d96efcf53e32cbc Author: Jason A. Donenfeld Date: Fri May 13 12:32:23 2022 +0200 random: use proper return types on get_random_{int,long}_wait() commit 7c3a8a1db5e03d02cc0abb3357a84b8b326dfac3 upstream. Before these were returning signed values, but the API is intended to be used with unsigned values. Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 9bb0c18aeff8dcbc64fb143cf7836f17a8f8dc8d Author: Jason A. Donenfeld Date: Fri May 13 12:29:38 2022 +0200 random: remove extern from functions in header commit 7782cfeca7d420e8bb707613d4cfb0f7ff29bb3a upstream. 
According to the kernel style guide, having `extern` on functions in headers is old school and deprecated, and doesn't add anything. So remove them from random.h, and tidy up the file a little bit too. Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 6de752dc0f557cf08a015f50f78ca3f76d9c86e6 Author: Jason A. Donenfeld Date: Tue May 3 15:30:45 2022 +0200 random: use static branch for crng_ready() commit f5bda35fba615ace70a656d4700423fa6c9bebee upstream. Since crng_ready() is only false briefly during initialization and then forever after becomes true, we don't need to evaluate it after, making it a prime candidate for a static branch. One complication, however, is that it changes state in a particular call to credit_init_bits(), which might be made from atomic context, which means we must kick off a workqueue to change the static key. Further complicating things, credit_init_bits() may be called sufficiently early on in system initialization such that system_wq is NULL. Fortunately, there exists the nice function execute_in_process_context(), which will immediately execute the function if !in_interrupt(), and otherwise defer it to a workqueue. During early init, before workqueues are available, in_interrupt() is always false, because interrupts haven't even been enabled yet, which means the function in that case executes immediately. Later on, after workqueues are available, in_interrupt() might be true, but in that case, the work is queued in system_wq and all goes well. Cc: Theodore Ts'o Cc: Sultan Alsawaf Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit d4f9d62c755e0a92055e35ccae32902daf5a7790 Author: Jason A. Donenfeld Date: Thu May 12 15:32:26 2022 +0200 random: credit architectural init the exact amount commit 12e45a2a6308105469968951e6d563e8f4fea187 upstream. RDRAND and RDSEED can fail sometimes, which is fine. We currently initialize the RNG with 512 bits of RDRAND/RDSEED. We only need 256 bits of those to succeed in order to initialize the RNG. Instead of the current "all or nothing" approach, actually credit these contributions the amount that is actually contributed. Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 47172dad2bf36f28b9d0d2a03f00aff0effa9ae3 Author: Jason A. Donenfeld Date: Thu May 5 02:20:22 2022 +0200 random: handle latent entropy and command line from random_init() commit 2f14062bb14b0fcfcc21e6dc7d5b5c0d25966164 upstream. Currently, start_kernel() adds latent entropy and the command line to the entropy pool *after* the RNG has been initialized, deferring when it's actually used by things like stack canaries until the next time the pool is seeded. This surely is not intended. Rather than splitting up which entropy gets added where and when between start_kernel() and random_init(), just do everything in random_init(), which should eliminate these kinds of bugs in the future. While we're at it, rename the awkwardly titled "rand_initialize()" to the more standard "random_init()" nomenclature. Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit d54f11df8cdd6916eedecf5c2741598d9238c359 Author: Jason A. Donenfeld Date: Tue May 10 15:20:42 2022 +0200 random: use proper jiffies comparison macro commit 8a5b8a4a4ceb353b4dd5bafd09e2b15751bcdb51 upstream.
This expands to exactly the same code that it replaces, but makes things consistent by using the same macro for jiffy comparisons throughout. Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 3ea5fefec0814a2d2662dfe2ad66f03ce1e74136 Author: Jason A. Donenfeld Date: Mon May 9 16:13:18 2022 +0200 random: remove ratelimiting for in-kernel unseeded randomness commit cc1e127bfa95b5fb2f9307e7168bf8b2b45b4c5e upstream. The CONFIG_WARN_ALL_UNSEEDED_RANDOM debug option controls whether the kernel warns about all unseeded randomness or just the first instance. There's some complicated rate limiting and comparison to the previous caller, such that even with CONFIG_WARN_ALL_UNSEEDED_RANDOM enabled, developers still don't see all the messages or even an accurate count of how many were missed. This is the result of basically parallel mechanisms aimed at accomplishing more or less the same thing, added at different points in random.c history, which sort of compete with the first-instance-only limiting we have now. It turns out, however, that nobody cares about the first unseeded randomness instance of in-kernel users. The same first user has been there for ages now, and nobody is doing anything about it. It isn't even clear that anybody _can_ do anything about it. Most places that can do something about it have switched over to using get_random_bytes_wait() or wait_for_random_bytes(), which is the right thing to do, but there is still much code that needs randomness sometimes during init, and as a general rule, if you're not using one of the _wait functions or the readiness notifier callback, you're bound to be doing it wrong just based on that fact alone. So warning about this same first user that can't easily change is simply not an effective mechanism for anything at all. Users can't do anything about it, as the Kconfig text points out -- the problem isn't in userspace code -- and kernel developers don't or more often can't react to it. Instead, show the warning for all instances when CONFIG_WARN_ALL_UNSEEDED_RANDOM is set, so that developers can debug things as need be, or if it isn't set, don't show a warning at all. At the same time, CONFIG_WARN_ALL_UNSEEDED_RANDOM now implies setting random.ratelimit_disable=1 by default, since if you care about one you probably care about the other too. And we can clean up usage around the related urandom_warning ratelimiter as well (whose behavior isn't changing), so that it properly counts missed messages after the 10 message threshold is reached. Cc: Theodore Ts'o Cc: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 39cd44bdb50491c98dcd56de075797360f9f271b Author: Jason A. Donenfeld Date: Mon May 9 13:40:55 2022 +0200 random: avoid initializing twice in credit race commit fed7ef061686cc813b1f3d8d0edc6c35b4d3537b upstream. Since all changes of crng_init now go through credit_init_bits(), we can fix a long-standing race in which two concurrent callers of credit_init_bits() have the new bit count >= some threshold, but are doing so with crng_init as a lower threshold, checked outside of a lock, resulting in crng_reseed() or similar being called twice. In order to fix this, we can use the original cmpxchg value of the bit count, and only change crng_init when the bit count transitions from below a threshold to meeting the threshold. Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 87fccf8de1ae039750df7118d0d6f12656af2254 Author: Jason A.
Donenfeld Date: Sun May 8 13:20:30 2022 +0200 random: use symbolic constants for crng_init states commit e3d2c5e79a999aa4e7d6f0127e16d3da5a4ff70d upstream. crng_init represents a state machine, with three states, and various rules for transitions. For the longest time, we've been managing these with "0", "1", and "2", and expecting people to figure it out. To make the code more obvious, replace these with proper enum values representing the transition, and then redocument what each of these states means. Reviewed-by: Dominik Brodowski Cc: Joe Perches Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 99de0356b3a22b42c484b992170ef5c6371a5d5f Author: Jason A. Donenfeld Date: Sat May 7 14:03:46 2022 +0200 siphash: use one source of truth for siphash permutations commit e73aaae2fa9024832e1f42e30c787c7baf61d014 upstream. The SipHash family of permutations is currently used in three places:
- siphash.c itself, used in the ordinary way it was intended.
- random32.c, in a construction from an anonymous contributor.
- random.c, as part of its fast_mix function.
Each one of these places reinvents the wheel with the same C code, same rotation constants, and same symmetry-breaking constants. This commit tidies things up a bit by placing macros for the permutations and constants into siphash.h, where each of the three .c users can access them. It also leaves a note dissuading more users of them from emerging. Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 7bbe75d780dee3d0a1b1018ee7fb296fab0063c2 Author: Jason A. Donenfeld Date: Fri May 6 23:19:43 2022 +0200 random: help compiler out with fast_mix() by using simpler arguments commit 791332b3cbb080510954a4c152ce02af8832eac9 upstream. Now that fast_mix() has more than one caller, gcc no longer inlines it. That's fine. But it also doesn't handle the compound literal argument we pass it very efficiently, nor does it handle the loop as well as it could. So just expand the code to spell out this function so that it generates the same code as it did before. Performance-wise, this now behaves as it did before the last commit. The difference in actual code size on x86 is 45 bytes, which is less than a cache line. Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 7339176128964aaf2839e8c770d6fc5a378d72c9 Author: Jason A. Donenfeld Date: Fri May 6 18:30:51 2022 +0200 random: do not use input pool from hard IRQs commit e3e33fc2ea7fcefd0d761db9d6219f83b4248f5c upstream. Years ago, a separate fast pool was added for interrupts, so that the cost associated with taking the input pool spinlocks and mixing into it would be avoided in places where latency is critical. However, one oversight was that add_input_randomness() and add_disk_randomness() still sometimes are called directly from the interrupt handler, rather than being deferred to a thread. This means that some unlucky interrupts will be caught doing a blake2s_compress() call and potentially spinning on input_pool.lock, which can also be taken by unprivileged users by writing into /dev/urandom. In order to fix this, add_timer_randomness() now checks whether it is being called from a hard IRQ and if so, just mixes into the per-cpu IRQ fast pool using fast_mix(), which is much faster and can be done lock-free. A nice consequence of this, as well, is that it means hard IRQ context FPU support is likely no longer useful.
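Schematically, the new dispatch looks like the sketch below; the helper names are stand-ins for the kernel's internals, and only the shape is taken from the commit text:

    #include <stdbool.h>

    bool in_hard_irq(void);                      /* stand-in for the in-IRQ predicate */
    void mix_percpu_fast_pool(unsigned long v);  /* lock-free fast_mix() path */
    void mix_input_pool_locked(unsigned long v); /* takes input_pool.lock */

    void add_timer_randomness_sketch(unsigned long entropy)
    {
            if (in_hard_irq())
                    mix_percpu_fast_pool(entropy);  /* never spin on a lock in hard IRQ */
            else
                    mix_input_pool_locked(entropy); /* process/softirq context is fine */
    }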
The entropy estimation algorithm used by add_timer_randomness() is also somewhat different than the one used for add_interrupt_randomness(). The former looks at deltas of deltas of deltas, while the latter just waits for 64 interrupts for one bit or for one second since the last bit. In order to bridge these, and since add_interrupt_randomness() runs after an add_timer_randomness() that's called from hard IRQ, we add to the fast pool credit the related amount, and then subtract one to account for add_interrupt_randomness()'s contribution. A downside of this, however, is that the num argument is potentially attacker controlled, which puts a bit more pressure on the fast_mix() sponge to do more than it's really intended to do. As a mitigating factor, the first 96 bits of input aren't attacker controlled (a cycle counter followed by zeros), which means it's essentially two rounds of siphash rather than one, which is somewhat better. It's also not that much different from add_interrupt_randomness()'s use of the irq stack instruction pointer register. Cc: Thomas Gleixner Cc: Filipe Manana Cc: Peter Zijlstra Cc: Borislav Petkov Cc: Theodore Ts'o Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit b60d7cbd6e69728de3f1edc33effd7e24131bc08 Author: Jason A. Donenfeld Date: Fri May 6 18:27:38 2022 +0200 random: order timer entropy functions below interrupt functions commit a4b5c26b79ffdfcfb816c198f2fc2b1e7b5b580f upstream. There are no code changes here; this is just a reordering of functions, so that in subsequent commits, the timer entropy functions can call into the interrupt ones. Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 4ca20665c81bf2f2244f72ec87729da8e8bdfb3c Author: Jason A. Donenfeld Date: Sat Apr 30 22:03:29 2022 +0200 random: do not pretend to handle premature next security model commit e85c0fc1d94c52483a603651748d4c76d6aa1c6b upstream. Per the thread linked below, "premature next" is not considered to be a realistic threat model, and leads to more serious security problems. "Premature next" is the scenario in which: - Attacker compromises the current state of a fully initialized RNG via some kind of infoleak. - New bits of entropy are added directly to the key used to generate the /dev/urandom stream, without any buffering or pooling. - Attacker then, somehow having read access to /dev/urandom, samples RNG output and brute forces the individual new bits that were added. - Result: the RNG never "recovers" from the initial compromise, a so-called violation of what academics term "post-compromise security". The usual solutions to this involve some form of delaying when entropy gets mixed into the crng. With Fortuna, this involves multiple input buckets. With what the Linux RNG was trying to do prior, this involves entropy estimation. However, by delaying when entropy gets mixed in, it also means that RNG compromises are extremely dangerous during the window of time before the RNG has gathered enough entropy, during which time nonces may become predictable (or repeated), ephemeral keys may not be secret, and so forth. Moreover, it's unclear how realistic "premature next" is from an attack perspective, if these attacks even make sense in practice. Put together -- and discussed in more detail in the thread below -- these constitute grounds for just doing away with the current code that pretends to handle premature next. 
I say "pretends" because it wasn't doing an especially great job at it either; should we change our mind about this direction, we would probably implement Fortuna to "fix" the "problem", in which case, removing the pretend solution still makes sense. This also reduces the crng reseed period from 5 minutes down to 1 minute. The rationale from the thread might lead us toward reducing that even further in the future (or even eliminating it), but that remains a topic of a future commit. At a high level, this patch changes semantics from: Before: Seed for the first time after 256 "bits" of estimated entropy have been accumulated since the system booted. Thereafter, reseed once every five minutes, but only if 256 new "bits" have been accumulated since the last reseeding. After: Seed for the first time after 256 "bits" of estimated entropy have been accumulated since the system booted. Thereafter, reseed once every minute. Most of this patch is renaming and removing: POOL_MIN_BITS becomes POOL_INIT_BITS, credit_entropy_bits() becomes credit_init_bits(), crng_reseed() loses its "force" parameter since it's now always true, the drain_entropy() function no longer has any use so it's removed, entropy estimation is skipped if we've already init'd, the various notifiers for "low on entropy" are now only active prior to init, and finally, some documentation comments are cleaned up here and there. Link: https://lore.kernel.org/lkml/YmlMGx6+uigkGiZ0@zx2c4.com/ Cc: Theodore Ts'o Cc: Nadia Heninger Cc: Tom Ristenpart Reviewed-by: Eric Biggers Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit a14dba363dabba3491b39081200b0f5a7c6c8b71 Author: Jason A. Donenfeld Date: Tue May 3 14:14:32 2022 +0200 random: do not use batches when !crng_ready() commit cbe89e5a375a51bbb952929b93fa973416fea74e upstream. It's too hard to keep the batches synchronized, and pointless anyway, since in !crng_ready(), we're updating the base_crng key really often, where batching only hurts. So instead, if the crng isn't ready, just call into get_random_bytes(). At this stage nothing is performance critical anyhow. Cc: Theodore Ts'o Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 1d4ea7af8cad2198ac166223950292db2c9b40a1 Author: Jason A. Donenfeld Date: Tue Apr 12 19:59:57 2022 +0200 random: insist on random_get_entropy() existing in order to simplify commit 4b758eda851eb9336ca86a0041a4d3da55f66511 upstream. All platforms are now guaranteed to provide some value for random_get_entropy(). In case some bug leads to this not being so, we print a warning, because that indicates that something is really very wrong (and likely other things are impacted too). This should never be hit, but it's a good and cheap way of finding out if something ever is problematic. Since we now have viable fallback code for random_get_entropy() on all platforms, which is, in the worst case, not worse than jiffies, we can count on getting the best possible value out of it. That means there's no longer a use for using jiffies as entropy input. It also means we no longer have a reason for doing the round-robin register flow in the IRQ handler, which was always of fairly dubious value. Instead we can greatly simplify the IRQ handler inputs and also unify the construction between 64-bits and 32-bits. We now collect the cycle counter and the return address, since those are the two things that matter. 
Because the return address and the irq number are likely related, to the extent we mix in the irq number, we can just xor it into the top unchanging bytes of the return address, rather than the bottom changing bytes of the cycle counter as before. Then, we can do a fixed 2 rounds of SipHash/HSipHash. Finally, we use the same construction of hashing only half of the [H]SipHash state on 32-bit and 64-bit. We're not actually discarding any entropy, since that entropy is carried through until the next time. And more importantly, it lets us do the same sponge-like construction everywhere. Cc: Theodore Ts'o Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit f4f425f32cd339964c9d618165975160177f15dc Author: Yury Norov Date: Thu Jan 30 22:16:40 2020 -0800 uapi: rename ext2_swab() to swab() and share globally in swab.h [ Upstream commit d5767057c9a76a29f073dad66b7fa12a90e8c748 ] ext2_swab() is defined locally in lib/find_bit.c. However, it is not specific to ext2, nor to bitmaps. There are many potential users of it, so rename it to just swab() and move it to include/uapi/linux/swab.h. The ABI guarantees that the size of unsigned long corresponds to BITS_PER_LONG; therefore, drop the unneeded cast. Link: http://lkml.kernel.org/r/20200103202846.21616-1-yury.norov@gmail.com Signed-off-by: Yury Norov Cc: Allison Randal Cc: Joe Perches Cc: Thomas Gleixner Cc: William Breathitt Gray Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman commit fc2a5485e5ec8c7796ccdb4a117b105ae2e202e2 Author: Jason A. Donenfeld Date: Fri Apr 8 18:03:13 2022 +0200 xtensa: use fallback for random_get_entropy() instead of zero commit e10e2f58030c5c211d49042a8c2a1b93d40b2ffb upstream. In the event that random_get_entropy() can't access a cycle counter or similar, falling back to returning 0 is really not the best we can do. Instead, at least calling random_get_entropy_fallback() would be preferable, because that always needs to return _something_, even falling back to jiffies eventually. It's not as though random_get_entropy_fallback() is super high precision or guaranteed to be entropic, but basically anything that's not zero all the time is better than returning zero all the time. This is accomplished by just including the asm-generic code like on other architectures, which means we can get rid of the empty stub function here. Cc: Thomas Gleixner Cc: Arnd Bergmann Acked-by: Max Filippov Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit b4cdbb50d8926feb46833efbd35f45f7342c8ba6 Author: Jason A. Donenfeld Date: Fri Apr 8 18:03:13 2022 +0200 sparc: use fallback for random_get_entropy() instead of zero commit ac9756c79797bb98972736b13cfb239fd2cffb79 upstream. In the event that random_get_entropy() can't access a cycle counter or similar, falling back to returning 0 is really not the best we can do. Instead, at least calling random_get_entropy_fallback() would be preferable, because that always needs to return _something_, even falling back to jiffies eventually. It's not as though random_get_entropy_fallback() is super high precision or guaranteed to be entropic, but basically anything that's not zero all the time is better than returning zero all the time. This is accomplished by just including the asm-generic code like on other architectures, which means we can get rid of the empty stub function here. Cc: Thomas Gleixner Cc: Arnd Bergmann Cc: David S. Miller Signed-off-by: Jason A.
Donenfeld Signed-off-by: Greg Kroah-Hartman commit 74795d08407304687a282b6faf666799b9cf2913 Author: Jason A. Donenfeld Date: Fri Apr 8 18:03:13 2022 +0200 um: use fallback for random_get_entropy() instead of zero commit 9f13fb0cd11ed2327abff69f6501a2c124c88b5a upstream. In the event that random_get_entropy() can't access a cycle counter or similar, falling back to returning 0 is really not the best we can do. Instead, at least calling random_get_entropy_fallback() would be preferable, because that always needs to return _something_, even falling back to jiffies eventually. It's not as though random_get_entropy_fallback() is super high precision or guaranteed to be entropic, but basically anything that's not zero all the time is better than returning zero all the time. This is accomplished by just including the asm-generic code like on other architectures, which means we can get rid of the empty stub function here. Cc: Thomas Gleixner Cc: Arnd Bergmann Cc: Richard Weinberger Cc: Anton Ivanov Acked-by: Johannes Berg Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit fce24b9f310b118f747ebdd0bb380be98f96a030 Author: Jason A. Donenfeld Date: Fri Apr 8 18:03:13 2022 +0200 x86/tsc: Use fallback for random_get_entropy() instead of zero commit 3bd4abc07a267e6a8b33d7f8717136e18f921c53 upstream. In the event that random_get_entropy() can't access a cycle counter or similar, falling back to returning 0 is suboptimal. Instead, fallback to calling random_get_entropy_fallback(), which isn't extremely high precision or guaranteed to be entropic, but is certainly better than returning zero all the time. If CONFIG_X86_TSC=n, then it's possible for the kernel to run on systems without RDTSC, such as 486 and certain 586, so the fallback code is only required for that case. As well, fix up both the new function and the get_cycles() function from which it was derived to use cpu_feature_enabled() rather than boot_cpu_has(), and use !IS_ENABLED() instead of #ifndef. Signed-off-by: Jason A. Donenfeld Reviewed-by: Thomas Gleixner Cc: Thomas Gleixner Cc: Arnd Bergmann Cc: Borislav Petkov Cc: x86@kernel.org Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit df43ba7d7f56c5c31f58b55cdf604e4387cf9a52 Author: Jason A. Donenfeld Date: Fri Apr 8 18:03:13 2022 +0200 nios2: use fallback for random_get_entropy() instead of zero commit c04e72700f2293013dab40208e809369378f224c upstream. In the event that random_get_entropy() can't access a cycle counter or similar, falling back to returning 0 is really not the best we can do. Instead, at least calling random_get_entropy_fallback() would be preferable, because that always needs to return _something_, even falling back to jiffies eventually. It's not as though random_get_entropy_fallback() is super high precision or guaranteed to be entropic, but basically anything that's not zero all the time is better than returning zero all the time. Cc: Thomas Gleixner Cc: Arnd Bergmann Acked-by: Dinh Nguyen Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit d2e76a31c64f1b8b4d7ffeb2671c914ebc15285b Author: Jason A. Donenfeld Date: Fri Apr 8 18:03:13 2022 +0200 arm: use fallback for random_get_entropy() instead of zero commit ff8a8f59c99f6a7c656387addc4d9f2247d75077 upstream. In the event that random_get_entropy() can't access a cycle counter or similar, falling back to returning 0 is really not the best we can do. 
Instead, at least calling random_get_entropy_fallback() would be preferable, because that always needs to return _something_, even falling back to jiffies eventually. It's not as though random_get_entropy_fallback() is super high precision or guaranteed to be entropic, but basically anything that's not zero all the time is better than returning zero all the time. Cc: Thomas Gleixner Cc: Arnd Bergmann Reviewed-by: Russell King (Oracle) Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit d0038968f8eb1bc7ce8d9f01c15c1dbf5adf5de9 Author: Jason A. Donenfeld Date: Fri Apr 8 18:03:13 2022 +0200 mips: use fallback for random_get_entropy() instead of just c0 random commit 1c99c6a7c3c599a68321b01b9ec243215ede5a68 upstream. For situations in which we don't have a c0 counter register available, we've been falling back to reading the c0 "random" register, which is usually bounded by the amount of TLB entries and changes every other cycle or so. This means it wraps extremely often. We can do better by combining this fast-changing counter with a potentially slower-changing counter from random_get_entropy_fallback() in the more significant bits. This commit combines the two, taking into account that the changing bits are in a different bit position depending on the CPU model. In addition, we previously were falling back to 0 for ancient CPUs that Linux does not support anyway; remove that dead path entirely. Cc: Thomas Gleixner Cc: Arnd Bergmann Tested-by: Maciej W. Rozycki Acked-by: Thomas Bogendoerfer Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit fb4434de3b6a050072a48b143bb9860b05be84d8 Author: Jason A. Donenfeld Date: Fri Apr 8 18:03:13 2022 +0200 m68k: use fallback for random_get_entropy() instead of zero commit 0f392c95391f2d708b12971a07edaa7973f9eece upstream. In the event that random_get_entropy() can't access a cycle counter or similar, falling back to returning 0 is really not the best we can do. Instead, at least calling random_get_entropy_fallback() would be preferable, because that always needs to return _something_, even falling back to jiffies eventually. It's not as though random_get_entropy_fallback() is super high precision or guaranteed to be entropic, but basically anything that's not zero all the time is better than returning zero all the time. Cc: Thomas Gleixner Cc: Arnd Bergmann Acked-by: Geert Uytterhoeven Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit bb61b00db2cde0ba4eae474c9b5a9b7258a093c7 Author: Jason A. Donenfeld Date: Sun Apr 10 16:49:50 2022 +0200 timekeeping: Add raw clock fallback for random_get_entropy() commit 1366992e16bddd5e2d9a561687f367f9f802e2e4 upstream. The addition of random_get_entropy_fallback() provides access to whichever time source has the highest frequency, which is useful for gathering entropy on platforms without available cycle counters. It's not necessarily as good as being able to quickly access a cycle counter that the CPU has, but it's still something, even when it falls back to being jiffies-based. In the event that a given arch does not define get_cycles(), falling back to the get_cycles() default implementation that returns 0 is really not the best we can do. Instead, at least calling random_get_entropy_fallback() would be preferable, because that always needs to return _something_, even falling back to jiffies eventually. 
It's not as though random_get_entropy_fallback() is super high precision or guaranteed to be entropic, but basically anything that's not zero all the time is better than returning zero all the time. Finally, since random_get_entropy_fallback() is used during extremely early boot when randomizing freelists in mm_init(), it can be called before timekeeping has been initialized. In that case there really is nothing we can do; jiffies hasn't even started ticking yet. So just give up and return 0. Suggested-by: Thomas Gleixner Signed-off-by: Jason A. Donenfeld Reviewed-by: Thomas Gleixner Cc: Arnd Bergmann Cc: Theodore Ts'o Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 0cf45867fa6c6accb9c49803b65192c3a50d265f Author: Jason A. Donenfeld Date: Sat Apr 23 21:11:41 2022 +0200 powerpc: define get_cycles macro for arch-override commit 408835832158df0357e18e96da7f2d1ed6b80e7f upstream. PowerPC defines a get_cycles() function, but it does not do the usual `#define get_cycles get_cycles` dance, making it impossible for generic code to see if an arch-specific function was defined. While the get_cycles() ifdef is not currently used, the following timekeeping patch in this series will depend on the macro existing (or not existing) when defining random_get_entropy(). Cc: Thomas Gleixner Cc: Arnd Bergmann Cc: Benjamin Herrenschmidt Cc: Paul Mackerras Acked-by: Michael Ellerman Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 72b90d9cdfe9af96996eab1941a8d7af02c8d073 Author: Jason A. Donenfeld Date: Sat Apr 23 21:11:41 2022 +0200 alpha: define get_cycles macro for arch-override commit 1097710bc9660e1e588cf2186a35db3d95c4d258 upstream. Alpha defines a get_cycles() function, but it does not do the usual `#define get_cycles get_cycles` dance, making it impossible for generic code to see if an arch-specific function was defined. While the get_cycles() ifdef is not currently used, the following timekeeping patch in this series will depend on the macro existing (or not existing) when defining random_get_entropy(). Cc: Thomas Gleixner Cc: Arnd Bergmann Cc: Richard Henderson Cc: Ivan Kokshaysky Acked-by: Matt Turner Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit a99753e73cd1bc676d7a9bf10dd3568dfe7f1c32 Author: Jason A. Donenfeld Date: Sat Apr 23 21:11:41 2022 +0200 parisc: define get_cycles macro for arch-override commit 8865bbe6ba1120e67f72201b7003a16202cd42be upstream. PA-RISC defines a get_cycles() function, but it does not do the usual `#define get_cycles get_cycles` dance, making it impossible for generic code to see if an arch-specific function was defined. While the get_cycles() ifdef is not currently used, the following timekeeping patch in this series will depend on the macro existing (or not existing) when defining random_get_entropy(). Cc: Thomas Gleixner Cc: Arnd Bergmann Acked-by: Helge Deller Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 28528a7bebbd880e0813efeed64963fda38161ad Author: Jason A. Donenfeld Date: Sat Apr 23 21:11:41 2022 +0200 s390: define get_cycles macro for arch-override commit 2e3df523256cb9836de8441e9c791a796759bb3c upstream. S390x defines a get_cycles() function, but it does not do the usual `#define get_cycles get_cycles` dance, making it impossible for generic code to see if an arch-specific function was defined. 
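The `#define get_cycles get_cycles` dance these commits refer to works as in the following sketch; random_get_entropy_fallback() is the name from this series, and the rest of the plumbing is simplified:

    /* arch header: define the function, then make it visible to the
     * preprocessor so generic code can detect the override */
    static inline unsigned long get_cycles(void)
    {
            return read_arch_cycle_counter(); /* arch detail, assumed */
    }
    #define get_cycles get_cycles

    /* generic timekeeping header: pick the override or the fallback */
    #ifdef get_cycles
    #define random_get_entropy() ((unsigned long)get_cycles())
    #else
    #define random_get_entropy() random_get_entropy_fallback()
    #endif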
While the get_cycles() ifdef is not currently used, the following timekeeping patch in this series will depend on the macro existing (or not existing) when defining random_get_entropy(). Cc: Thomas Gleixner Cc: Arnd Bergmann Cc: Vasily Gorbik Cc: Alexander Gordeev Cc: Christian Borntraeger Cc: Sven Schnelle Acked-by: Heiko Carstens Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 115d4bd0dee2ee63cf595a874f46d09a1e8542ce Author: Jason A. Donenfeld Date: Sat Apr 23 21:11:41 2022 +0200 ia64: define get_cycles macro for arch-override commit 57c0900b91d8891ab43f0e6b464d059fda51d102 upstream. Itanium defines a get_cycles() function, but it does not do the usual `#define get_cycles get_cycles` dance, making it impossible for generic code to see if an arch-specific function was defined. While the get_cycles() ifdef is not currently used, the following timekeeping patch in this series will depend on the macro existing (or not existing) when defining random_get_entropy(). Cc: Thomas Gleixner Cc: Arnd Bergmann Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 9af95dfaa1a8edcb81e6cb544e072826745242ff Author: Jason A. Donenfeld Date: Thu May 5 02:20:22 2022 +0200 init: call time_init() before rand_initialize() commit fe222a6ca2d53c38433cba5d3be62a39099e708e upstream. Currently time_init() is called after rand_initialize(), but rand_initialize() makes use of the timer on various platforms, and sometimes this timer needs to be initialized by time_init() first. In order for random_get_entropy() to not return zero during early boot when it's potentially used as an entropy source, reverse the order of these two calls. The block doing random initialization was right before time_init() before, so changing the order shouldn't have any complicated effects. Cc: Andrew Morton Reviewed-by: Stafford Horne Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit daddeef50b22f824100982bb28690620e2d06ace Author: Jason A. Donenfeld Date: Tue May 3 21:43:58 2022 +0200 random: fix sysctl documentation nits commit 069c4ea6871c18bd368f27756e0f91ffb524a788 upstream. A semicolon was missing, and the almost-alphabetical-but-not ordering was confusing, so regroup these by category instead. Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 6906938c1cb3d23900e474dc51713979f3cfb3d8 Author: Jason A. Donenfeld Date: Mon Apr 18 20:57:31 2022 +0200 random: document crng_fast_key_erasure() destination possibility commit 8717627d6ac53251ee012c3c7aca392f29f38a42 upstream. This reverts 35a33ff3807d ("random: use memmove instead of memcpy for remaining 32 bytes"), which was made on a totally bogus basis. The thing it was worried about overlapping came from the stack, not from one of its arguments, as Eric pointed out. But the fact that this confusion even happened draws attention to the fact that it's a bit non-obvious that the random_data parameter can alias chacha_state, and in fact should do so when the caller can't rely on the stack being cleared in a timely manner. So this commit documents that. Reported-by: Eric Biggers Reviewed-by: Eric Biggers Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 51fc7c5bf5f34416a2151b5a1f4eefdce8ddeaef Author: Jason A. Donenfeld Date: Fri Apr 8 18:14:57 2022 +0200 random: make random_get_entropy() return an unsigned long commit b0c3e796f24b588b862b61ce235d3c9417dc8983 upstream. 
Some implementations were returning type `unsigned long`, while others that fell back to get_cycles() were implicitly returning a `cycles_t` or an untyped constant int literal. That makes for weird and confusing code, and basically all code in the kernel already handled it like it was an `unsigned long`. I recently tried to handle it as the largest type it could be, a `cycles_t`, but doing so doesn't really help with much. Instead let's just make random_get_entropy() return an unsigned long all the time. This also matches the commonly used `arch_get_random_long()` function, so now RDRAND and RDTSC return the same sized integer, which means one can fallback to the other more gracefully. Cc: Dominik Brodowski Cc: Theodore Ts'o Acked-by: Thomas Gleixner Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 5f6d77400ca9bbbef99b558a7dbceba5e1c072f7 Author: Jason A. Donenfeld Date: Wed Apr 6 02:36:16 2022 +0200 random: check for signals every PAGE_SIZE chunk of /dev/[u]random commit e3c1c4fd9e6d14059ed93ebfe15e1c57793b1a05 upstream. In 1448769c9cdb ("random: check for signal_pending() outside of need_resched() check"), Jann pointed out that we previously were only checking the TIF_NOTIFY_SIGNAL and TIF_SIGPENDING flags if the process had TIF_NEED_RESCHED set, which meant in practice, super long reads to /dev/[u]random would delay signal handling by a long time. I tried this using the below program, and indeed I wasn't able to interrupt a /dev/urandom read until after several megabytes had been read. The bug he fixed has always been there, and so code that reads from /dev/urandom without checking the return value of read() has mostly worked for a long time, for most sizes, not just for <= 256. Maybe it makes sense to keep that code working. The reason it was so small prior, ignoring the fact that it didn't work anyway, was likely because /dev/random used to block, and that could happen for pretty large lengths of time while entropy was gathered. But now, it's just a chacha20 call, which is extremely fast and is just operating on pure data, without having to wait for some external event. In that sense, /dev/[u]random is a lot more like /dev/zero. Taking a page out of /dev/zero's read_zero() function, it always returns at least one chunk, and then checks for signals after each chunk. Chunk sizes there are of length PAGE_SIZE. Let's just copy the same thing for /dev/[u]random, and check for signals and cond_resched() for every PAGE_SIZE amount of data. This makes the behavior more consistent with expectations, and should mitigate the impact of Jann's fix for the age-old signal check bug. ---- test program ----

    #include <unistd.h>
    #include <signal.h>
    #include <stdio.h>
    #include <sys/random.h>

    static unsigned char x[~0U];

    static void handle(int) { }

    int main(int argc, char *argv[])
    {
            pid_t pid = getpid(), child;

            signal(SIGUSR1, handle);
            if (!(child = fork())) {
                    for (;;)
                            kill(pid, SIGUSR1);
            }
            pause();
            printf("interrupted after reading %zd bytes\n", getrandom(x, sizeof(x), 0));
            kill(child, SIGTERM);
            return 0;
    }

Cc: Jann Horn Cc: Theodore Ts'o Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit f68316bb5cb7e88297ba9cc4701be8f0be21d70b Author: Jann Horn Date: Tue Apr 5 18:39:31 2022 +0200 random: check for signal_pending() outside of need_resched() check commit 1448769c9cdb69ad65287f4f7ab58bc5f2f5d7ba upstream. signal_pending() checks TIF_NOTIFY_SIGNAL and TIF_SIGPENDING, which signal that the task should bail out of the syscall when possible.
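The chunked pattern borrowed from read_zero() can be sketched as follows; CHUNK_SIZE stands in for PAGE_SIZE and the two helpers are assumptions, not kernel API:

    #include <stddef.h>

    #define CHUNK_SIZE 4096                            /* stands in for PAGE_SIZE */

    size_t produce_random(unsigned char *p, size_t n); /* chacha20 output, assumed */
    int signal_pending_sketch(void);                   /* stand-in for signal_pending() */

    /* always emit at least one chunk, then check for signals between chunks */
    static size_t read_random_sketch(unsigned char *buf, size_t len)
    {
            size_t done = 0;

            while (done < len) {
                    size_t n = len - done < CHUNK_SIZE ? len - done : CHUNK_SIZE;

                    done += produce_random(buf + done, n);
                    if (signal_pending_sketch())
                            break; /* return the partial count to the caller */
            }
            return done;
    }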
signal_pending() is a separate concept from need_resched(), which checks TIF_NEED_RESCHED, signaling that the task should preempt. In particular, with the current code, the signal_pending() bailout probably won't work reliably. Change this to look like other functions that read lots of data, such as read_zero(). Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Signed-off-by: Jann Horn Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 20039f06608144746b1ecfdbea71deb5623e831f Author: Jason A. Donenfeld Date: Tue Apr 5 16:40:51 2022 +0200 random: do not allow user to keep crng key around on stack commit aba120cc101788544aa3e2c30c8da88513892350 upstream. The fast key erasure RNG design relies on the key that's used being used once and then discarded. We do this, making judicious use of memzero_explicit(). However, reads to /dev/urandom and calls to getrandom() involve a copy_to_user(), and userspace can use FUSE or userfaultfd, or make a massive call, dynamically remap memory addresses as it goes, and set the process priority to idle, in order to keep a kernel stack alive indefinitely. By probing /proc/sys/kernel/random/entropy_avail to learn when the crng key is refreshed, a malicious userspace could mount this attack every 5 minutes thereafter, breaking the crng's forward secrecy. In order to fix this, we just overwrite the stack's key with the first 32 bytes of the "free" fast key erasure output. If we're returning <= 32 bytes to the user, then we can still return those bytes directly, so that short reads don't become slower. And for long reads, the difference is hopefully lost in the amortization, so it doesn't change much, with that amortization helping variously for medium reads. We don't need to do this for get_random_bytes() and the various kernel-space callers, and later, if we ever switch to always batching, this won't be necessary either, so there's no need to change the API of these functions. Cc: Theodore Ts'o Reviewed-by: Jann Horn Fixes: c92e040d575a ("random: add backtracking protection to the CRNG") Fixes: 186873c549df ("random: use simpler fast key erasure flow on per-cpu keys") Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit d1839b8f5a81b45fb6ac7eb875b5c6cab01b8fdd Author: Jan Varho Date: Mon Apr 4 19:42:30 2022 +0300 random: do not split fast init input in add_hwgenerator_randomness() commit 527a9867af29ff89f278d037db704e0ed50fb666 upstream. add_hwgenerator_randomness() tries to only use the required amount of input for fast init, but credits all the entropy, rather than a fraction of it. Since it's hard to determine how much entropy is left over out of a non-uniformly random sample, either give it all to fast init or credit it, but don't attempt to do both. In the process, we can clean up the injection code to no longer need to return a value. Signed-off-by: Jan Varho [Jason: expanded commit message] Fixes: 73c7733f122e ("random: do not throw away excess input to crng_fast_load") Cc: stable@vger.kernel.org # 5.17+, requires af704c856e88 Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit dcc2de07f933753fe8cda60c46a53e674a28f7b0 Author: Jason A. Donenfeld Date: Thu Mar 31 11:01:01 2022 -0400 random: mix build-time latent entropy into pool at init commit 1754abb3e7583c570666fa1e1ee5b317e88c89a0 upstream. Prior, the "input_pool_data" array needed no real initialization, and so it was easy to mark it with __latent_entropy to populate it during compile-time.
In switching to using a hash function, this required us to specifically initialize it to some specific state, which means we dropped the __latent_entropy attribute. An unfortunate side effect was this meant the pool was no longer seeded using compile-time random data. In order to bring this back, we declare an array in rand_initialize() with __latent_entropy and call mix_pool_bytes() on that at init, which accomplishes the same thing as before. We make this __initconst, so that it doesn't take up space at runtime after init. Fixes: 6e8ec2552c7d ("random: use computational hash for entropy extraction") Reviewed-by: Dominik Brodowski Reviewed-by: Theodore Ts'o Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit c2e2f3287b093ae702989fb5278af48249fff1f1 Author: Jason A. Donenfeld Date: Tue Mar 22 22:21:52 2022 -0600 random: re-add removed comment about get_random_{u32,u64} reseeding commit dd7aa36e535797926d8eb311da7151919130139d upstream. The comment about get_random_{u32,u64}() not invoking reseeding got added in an unrelated commit, that then was recently reverted by 0313bc278dac ("Revert "random: block in /dev/urandom""). So this adds that little comment snippet back, and improves the wording a bit too. Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 817f35f3be9306b7201687ac62094ff5de3852be Author: Jason A. Donenfeld Date: Tue Mar 22 21:43:12 2022 -0600 random: treat bootloader trust toggle the same way as cpu trust toggle commit d97c68d178fbf8aaaf21b69b446f2dfb13909316 upstream. If CONFIG_RANDOM_TRUST_CPU is set, the RNG initializes using RDRAND. But, the user can disable (or enable) this behavior by setting `random.trust_cpu=0/1` on the kernel command line. This allows system builders to do reasonable things while avoiding howls from tinfoil hatters. (Or vice versa.) CONFIG_RANDOM_TRUST_BOOTLOADER is basically the same thing, but regards the seed passed via EFI or device tree, which might come from RDRAND or a TPM or somewhere else. In order to allow distros to more easily enable this while avoiding those same howls (or vice versa), this commit adds the corresponding `random.trust_bootloader=0/1` toggle. Cc: Theodore Ts'o Cc: Graham Christensen Reviewed-by: Ard Biesheuvel Reviewed-by: Dominik Brodowski Link: https://github.com/NixOS/nixpkgs/pull/165355 Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 4dd01ce6929e96ec57995d17eeec0551c6fc145b Author: Jason A. Donenfeld Date: Mon Mar 21 18:48:05 2022 -0600 random: skip fast_init if hwrng provides large chunk of entropy commit af704c856e888fb044b058d731d61b46eeec499d upstream. At boot time, EFI calls add_bootloader_randomness(), which in turn calls add_hwgenerator_randomness(). Currently add_hwgenerator_randomness() feeds the first 64 bytes of randomness to the "fast init" non-crypto-grade phase. But if add_hwgenerator_randomness() gets called with more than POOL_MIN_BITS of entropy, there's no point in passing it off to the "fast init" stage, since that's enough entropy to bootstrap the real RNG. The "fast init" stage is just there to provide _something_ in the case where we don't have enough entropy to properly bootstrap the RNG. But if we do have enough entropy to bootstrap the RNG, the current logic doesn't serve a purpose. So, in the case where we're passed greater than or equal to POOL_MIN_BITS of entropy, this commit makes us skip the "fast init" phase. Cc: Dominik Brodowski Signed-off-by: Jason A. 
Donenfeld Signed-off-by: Greg Kroah-Hartman commit 26fb4cddfdb3c9bdcd34c098bc54d34a4f70085b Author: Jason A. Donenfeld Date: Tue Mar 8 10:12:16 2022 -0700 random: check for signal and try earlier when generating entropy commit 3e504d2026eb6c8762cd6040ae57db166516824a upstream. Rather than waiting a full second in an interruptible wait before trying to generate entropy, try to generate entropy first and wait afterwards. While waiting one second might give an extra second for getting entropy from elsewhere, we're already pretty late in the init process here, and whatever else is generating entropy will still continue to contribute. This has implications on signal handling: we call try_to_generate_entropy() from wait_for_random_bytes(), and wait_for_random_bytes() always uses wait_event_interruptible_timeout() when waiting, since it's called by userspace code in restartable contexts, where signals can pend. Since try_to_generate_entropy() now runs first, if a signal is pending, it's necessary for try_to_generate_entropy() to check for signals, since it won't hit the wait until after try_to_generate_entropy() has returned. And even before this change, when entering a busy loop in try_to_generate_entropy(), we should have been checking to see if any signals are pending, so that a process doesn't get stuck in that loop longer than expected. Cc: Theodore Ts'o Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit b01448758cb0dba27ff7e35487260840e49424eb Author: Jason A. Donenfeld Date: Tue Mar 8 23:32:34 2022 -0700 random: reseed more often immediately after booting commit 7a7ff644aeaf071d433caffb3b8ea57354b55bd3 upstream. In order to chip away at the "premature first" problem, we augment our existing entropy accounting with more frequent reseedings at boot. The idea is that at boot, we're getting entropy from various places, and we're not very sure which of the early boot entropy is good and which isn't. Even when we're crediting the entropy, we're still not totally certain that it's any good. Since boot is the one time (aside from a compromise) that we have zero entropy, it's important that we shepherd entropy into the crng fairly often. At the same time, we don't want a "premature next" problem, whereby an attacker can brute force individual bits of added entropy. In lieu of going full-on Fortuna (for now), we can pick a simpler strategy of just reseeding more often during the first 5 minutes after boot. This is still bounded by the 256-bit entropy credit requirement, so we'll skip a reseeding if we haven't reached that, but in case entropy /is/ coming in, this ensures that it makes its way into the crng rather rapidly during these early stages. Ordinarily we reseed if the previous reseeding is 300 seconds old. This commit changes things so that for the first 600 seconds of boot time, we reseed if the previous reseeding is uptime / 2 seconds old. That means that each reseeding happens at an uptime at least double that of the previous reseeding. Cc: Theodore Ts'o Reviewed-by: Eric Biggers Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit d234795601aecae22152a77fb9a0b2dd75185cc6 Author: Jason A. Donenfeld Date: Tue Mar 8 11:20:17 2022 -0700 random: make consistent usage of crng_ready() commit a96cfe2d427064325ecbf56df8816c6b871ec285 upstream. Rather than sometimes checking `crng_init < 2`, we should always use the crng_ready() macro, so that should we change anything later, it's consistent.
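(For context, the macro in question is essentially the following, shown with an illustrative call-site conversion -- a sketch, not the exact diff:)

    /* The macro being standardized on, approximately: */
    #define crng_ready() (likely(crng_init > 1))

    /* So instead of call sites open-coding the comparison and hint: */
    if (unlikely(crng_init < 2))
        return -EAGAIN;

    /* ...they all uniformly write: */
    if (!crng_ready())
        return -EAGAIN;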
Additionally, that macro already has a likely() around it, which means we don't need to open code our own likely() and unlikely() annotations. Cc: Theodore Ts'o Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit ede726b1633e23ad70f3cc47e14a63bc0c55d10b Author: Jason A. Donenfeld Date: Fri Feb 11 14:58:44 2022 +0100 random: use SipHash as interrupt entropy accumulator commit f5eab0e2db4f881fb2b62b3fdad5b9be673dd7ae upstream. The current fast_mix() function is a piece of classic mailing list crypto, where it just sort of sprung up by an anonymous author without a lot of real analysis of what precisely it was accomplishing. As an ARX permutation alone, there are some easily searchable differential trails in it, and as a means of preventing malicious interrupts, it completely fails, since it xors new data into the entire state every time. It can't really be analyzed as a random permutation, because it clearly isn't, and it can't be analyzed as an interesting linear algebraic structure either, because it's also not that. There really is very little one can say about it in terms of entropy accumulation. It might diffuse bits, some of the time, maybe, we hope, I guess. But for the most part, it fails to accomplish anything concrete. As a reminder, the simple goal of add_interrupt_randomness() is to simply accumulate entropy until ~64 interrupts have elapsed, and then dump it into the main input pool, which uses a cryptographic hash. It would be nice to have something cryptographically strong in the interrupt handler itself, in case a malicious interrupt compromises a per-cpu fast pool within the 64 interrupts / 1 second window, and then inside of that same window somehow can control its return address and cycle counter, even if that's a bit far-fetched. However, with a very CPU-limited budget, actually doing that remains an active research project (and perhaps there'll be something useful for Linux to come out of it). And while the abundance of caution would be nice, this isn't *currently* the security model, and we don't yet have a fast enough solution to make it our security model. Plus there's not exactly a pressing need to do that. (And for the avoidance of doubt, the actual cluster of 64 accumulated interrupts still gets dumped into our cryptographically secure input pool.) So, for now we are going to stick with the existing interrupt security model, which assumes that each cluster of 64 interrupt data samples is mostly non-malicious and not colluding with an infoleaker. With this as our goal, we have a few more choices, simply aiming to accumulate entropy, while discarding the least amount of it. We know from the literature that random oracles, instantiated as computational hash functions, make good entropy accumulators and extractors, which is the justification for using BLAKE2s in the main input pool. As mentioned, we don't have that luxury here, but we also don't have the same security model requirements, because we're assuming that there aren't malicious inputs. A pseudorandom function instance can approximately behave like a random oracle, provided that the key is uniformly random. But since we're not concerned with malicious inputs, we can pick a fixed key, which is not secret, knowing that "nature" won't interact with a sufficiently chosen fixed key by accident. So we pick a PRF with a fixed initial key, and accumulate into it continuously, dumping the result every 64 interrupts into our cryptographically secure input pool.
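(A sketch of that accumulate-then-dump shape, instantiated with the single SipRound settled on just below -- illustrative C, not the kernel's verbatim fast_mix():)

    #include <stdint.h>

    #define ROL64(x, b) (((x) << (b)) | ((x) >> (64 - (b))))

    /* One standard SipRound over the four-word state. */
    static void sipround(uint64_t s[4])
    {
        s[0] += s[1]; s[1] = ROL64(s[1], 13); s[1] ^= s[0]; s[0] = ROL64(s[0], 32);
        s[2] += s[3]; s[3] = ROL64(s[3], 16); s[3] ^= s[2];
        s[0] += s[3]; s[3] = ROL64(s[3], 21); s[3] ^= s[0];
        s[2] += s[1]; s[1] = ROL64(s[1], 17); s[1] ^= s[2]; s[2] = ROL64(s[2], 32);
    }

    /* Word-by-word absorption, as ordinary SipHash does, but with a
     * single round per word (SipHash-1-x) and no finalization: the raw
     * state itself is what later gets dumped into the BLAKE2s pool.
     * The fixed key {0, 0} means s[] starts as SipHash's bare IVs. */
    static void fast_mix_sketch(uint64_t s[4], const uint64_t *in, int nwords)
    {
        while (nwords--) {
            s[3] ^= *in;
            sipround(s);
            s[0] ^= *in++;
        }
    }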
For this, we make use of SipHash-1-x on 64-bit and HalfSipHash-1-x on 32-bit, which are already in use in the kernel's hsiphash family of functions and achieve the same performance as the function they replace. It would be nice to do two rounds, but we don't exactly have the CPU budget handy for that, and one round alone is already sufficient. As mentioned, we start with a fixed initial key (zeros is fine), and allow SipHash's symmetry breaking constants to turn that into a useful starting point. Also, since we're dumping the result (or half of it on 64-bit so as to tax our hash function the same amount on all platforms) into the cryptographically secure input pool, there's no point in finalizing SipHash's output, since it'll wind up being finalized by something much stronger. This means that all we need to do is use the ordinary round function word-by-word, as normal SipHash does. Simplified, the flow is as follows:

    Initialize:
        siphash_state_t state;
        siphash_init(&state, key={0, 0, 0, 0});

    Update (accumulate) on interrupt:
        siphash_update(&state, interrupt_data_and_timing);

    Dump into input pool after 64 interrupts:
        blake2s_update(&input_pool, &state, sizeof(state) / 2);

The result of all of this is that the security model is unchanged from before -- we assume non-malicious inputs -- yet we now implement that model with a stronger argument. I would like to emphasize, again, that the purpose of this commit is to improve the existing design, by making it analyzable, without changing any fundamental assumptions. There may well be value down the road in changing up the existing design, using something cryptographically strong, or simply using a ring buffer of samples rather than having a fast_mix() at all, or changing which and how much data we collect each interrupt so that we can use something linear, or a variety of other ideas. This commit does not invalidate the potential for those in the future. For example, in the future, if we're able to characterize the data we're collecting on each interrupt, we may be able to inch toward information theoretic accumulators. One published analysis shows that `s = ror32(s, 7) ^ x` and `s = ror64(s, 19) ^ x` make very good accumulators for 2-monotone distributions, which would apply to timestamp counters, like random_get_entropy() or jiffies, but would not apply to our current combination of the two values, or to the various function addresses and register values we mix in. Alternatively, other published work shows that max-period linear functions with no non-trivial invariant subspace make good extractors, used in the form `s = f(s) ^ x`. However, this only works if the input data is both independent and identically distributed, and obviously a collection of address values and counters fails; so it goes with theoretical papers. Future directions here may involve trying to characterize more precisely what we actually need to collect in the interrupt handler, and building something specific around that. However, as mentioned, the morass of data we're gathering at the interrupt handler presently defies characterization, and so we use SipHash for now, which works well and performs well. Cc: Theodore Ts'o Cc: Greg Kroah-Hartman Reviewed-by: Jean-Philippe Aumasson Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 3ddb66f469d51fff8bc9d0e1ba852fc4f08bcd3e Author: Jason A. Donenfeld Date: Tue Mar 1 20:03:49 2022 +0100 random: replace custom notifier chain with standard one commit 5acd35487dc911541672b3ffc322851769c32a56 upstream.
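(The standard mechanism this commit adopts looks roughly like the following -- a hedged sketch using the kernel's stock atomic notifier API, with illustrative consumer names:)

    #include <linux/notifier.h>

    /* Producer side: one chain, fired exactly once at RNG readiness. */
    static ATOMIC_NOTIFIER_HEAD(random_ready_chain);

    static void notify_rng_ready(void)
    {
        atomic_notifier_call_chain(&random_ready_chain, 0, NULL);
    }

    /* Consumer side: a callback that runs when the chain fires. */
    static int my_ready_cb(struct notifier_block *nb,
                           unsigned long action, void *data)
    {
        /* kick whatever was waiting on the RNG to initialize */
        return NOTIFY_DONE;
    }

    static struct notifier_block my_nb = { .notifier_call = my_ready_cb };

    static int __init my_setup(void)
    {
        return atomic_notifier_chain_register(&random_ready_chain, &my_nb);
    }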
We previously rolled our own randomness readiness notifier, which only has two users in the whole kernel. Replace this with a more standard atomic notifier block that serves the same purpose with less code. Also unexport the symbols, because no modules use it, only unconditional builtins. The only drawback is that it's possible for a notification handler returning the "stop" code to prevent further processing, but given that there are only two users, and that we're unexporting this anyway, that doesn't seem like a significant drawback for the simplification we receive here. Cc: Greg Kroah-Hartman Cc: Theodore Ts'o Reviewed-by: Dominik Brodowski [Jason: for stable, also backported to crypto/drbg.c, not unexporting.] Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit b6dd53be9f051b6dd7dc112c01fdfc53b77375d2 Author: Jason A. Donenfeld Date: Mon Feb 28 14:00:52 2022 +0100 random: don't let 644 read-only sysctls be written to commit 77553cf8f44863b31da242cf24671d76ddb61597 upstream. We leave around these old sysctls for compatibility, and we keep them "writable" for compatibility, but even after writing, we should keep reporting the same value. This is consistent with how userspaces tend to use sysctl_random_write_wakeup_bits, writing to it, and then later reading from it and using the value. Cc: Theodore Ts'o Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit ec93b566e617b30a4dddcb55cc0f0b4b13aa69b7 Author: Jason A. Donenfeld Date: Mon Feb 28 13:57:57 2022 +0100 random: give sysctl_random_min_urandom_seed a more sensible value commit d0efdf35a6a71d307a250199af6fce122a7c7e11 upstream. This isn't used by anything or anywhere, but we can't delete it due to compatibility. So at least give it the correct value of what it's supposed to be instead of a garbage one. Cc: Theodore Ts'o Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit c63fce9b6279ddadac9b19b85d9037c508aa8020 Author: Jason A. Donenfeld Date: Sun Feb 13 18:25:07 2022 +0100 random: do crng pre-init loading in worker rather than irq commit c2a7de4feb6e09f23af7accc0f882a8fa92e7ae5 upstream. Taking spinlocks from IRQ context is generally problematic for PREEMPT_RT. That is, in part, why we take trylocks instead. However, a spin_trylock() is also problematic since another spin_lock() invocation can potentially PI-boost the wrong task, as the spin_trylock() is invoked from an IRQ-context, so the task on CPU (random task or idle) is not the actual owner. Additionally, by deferring the crng pre-init loading to the worker, we can use the cryptographic hash function rather than xor, which is perhaps a meaningful difference when considering this data has only been through the relatively weak fast_mix() function. The biggest downside of this approach is that the pre-init loading is now deferred until later, which means things that need random numbers after interrupts are enabled, but before workqueues are running -- or before this particular worker manages to run -- are going to get into trouble. Hopefully in the real world, this window is rather small, especially since this code won't run until 64 interrupts have occurred. Cc: Sultan Alsawaf Cc: Thomas Gleixner Cc: Peter Zijlstra Cc: Eric Biggers Cc: Theodore Ts'o Acked-by: Sebastian Andrzej Siewior Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 7adbe3c33ba386c40f8d1017877df7e7efc85b2f Author: Jason A.
Donenfeld Date: Thu Feb 24 18:30:58 2022 +0100 random: unify cycles_t and jiffies usage and types commit abded93ec1e9692920fe309f07f40bd1035f2940 upstream. random_get_entropy() returns a cycles_t, not an unsigned long, which is sometimes 64 bits on various 32-bit platforms, including x86. Conversely, jiffies is always unsigned long. This commit fixes things to use cycles_t for fields that use random_get_entropy(), named "cycles", and unsigned long for fields that use jiffies, named "now". It's also good to mix in a cycles_t and a jiffies in the same way for both add_device_randomness and add_timer_randomness, rather than using xor in one case. Finally, we unify the order of these volatile reads, always reading the more precise cycles counter, and then jiffies, so that the cycle counter is as close to the event as possible. Cc: Theodore Ts'o Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 05ef023f11d211ac892ec16f3a6c84f9ae2c76a1 Author: Jason A. Donenfeld Date: Thu Feb 24 23:04:56 2022 +0100 random: cleanup UUID handling commit 64276a9939ff414f2f0db38036cf4e1a0a703394 upstream. Rather than hard coding various lengths, we can use the right constants. Strings should be `char *` while buffers should be `u8 *`. Rather than have a nonsensical and unused maxlength, just remove it. Finally, use snprintf instead of sprintf, just out of good hygiene. As well, remove the old comment about returning a binary UUID via the binary sysctl syscall. That syscall was removed from the kernel in 5.5, and actually, the "uuid_strategy" function and related infrastructure for even serving it via the binary sysctl syscall was removed with 894d2491153a ("sysctl drivers: Remove dead binary sysctl support") back in 2.6.33. Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 46aeaf49e28bf05d8f6db73bf23be7ffcc2633f8 Author: Jason A. Donenfeld Date: Tue Feb 22 14:01:57 2022 +0100 random: only wake up writers after zap if threshold was passed commit a3f9e8910e1584d7725ef7d5ac870920d42d0bb4 upstream. The only time that we need to wake up /dev/random writers on RNDCLEARPOOL/RNDZAPPOOL is when we're changing from a value that is greater than or equal to POOL_MIN_BITS to zero, because if we're changing from below POOL_MIN_BITS to zero, the writers are already unblocked. Cc: Theodore Ts'o Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 4506589f545bcd4d7793462599969122c0478518 Author: Jason A. Donenfeld Date: Tue Feb 22 13:46:10 2022 +0100 random: round-robin registers as ulong, not u32 commit da3951ebdcd1cb1d5c750e08cd05aee7b0c04d9a upstream. When the interrupt handler does not have a valid cycle counter, it calls get_reg() to read a register from the irq stack, in round-robin. Currently it does this assuming that registers are 32-bit. This is _probably_ the case, and probably all platforms without cycle counters are in fact 32-bit platforms. But maybe not, and either way, it's not quite correct. This commit fixes that to deal with `unsigned long` rather than `u32`. Cc: Theodore Ts'o Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit ed20ec409ec2d0dc90113da9d9a323c0c62c5399 Author: Jason A. Donenfeld Date: Sun Feb 13 22:48:04 2022 +0100 random: clear fast pool, crng, and batches in cpuhp bring up commit 3191dd5a1179ef0fad5a050a1702ae98b6251e8f upstream. 
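(In sketch form, the bring-up handlers described below amount to little more than this -- hedged, with approximate names; the actual callbacks are registered through the kernel's cpuhp_setup_state() machinery:)

    /* Runs early on the incoming CPU, while interrupts are still
     * disabled: throw away any stale per-cpu crng/batch state. */
    static int random_prepare_cpu_sketch(unsigned int cpu)
    {
        /* Forcing generation to an impossible value makes the first
         * user on this CPU reseed its per-cpu crng and batches. */
        per_cpu_ptr(&crngs, cpu)->generation = ULONG_MAX;
        return 0;
    }

    /* Runs once the CPU is online and workqueues are re-enabled:
     * a plain (non-atomic) reset of the fast pool's count suffices. */
    static int random_online_cpu_sketch(unsigned int cpu)
    {
        per_cpu_ptr(&irq_randomness, cpu)->count = 0;
        return 0;
    }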
For the irq randomness fast pool, rather than having to use expensive atomics, which were visibly the most expensive thing in the entire irq handler, simply take care of the extreme edge case of resetting count to zero in the cpuhp online handler, just after workqueues have been reenabled. This simplifies the code a bit and lets us use vanilla variables rather than atomics, and performance should be improved. As well, very early on when the CPU comes up, while interrupts are still disabled, we clear out the per-cpu crng and its batches, so that it always starts with fresh randomness. Cc: Thomas Gleixner Cc: Peter Zijlstra Cc: Theodore Ts'o Cc: Sultan Alsawaf Cc: Dominik Brodowski Acked-by: Sebastian Andrzej Siewior Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 75cc37f461de9f8a80aa381ee882cf70822ee4a2 Author: Jason A. Donenfeld Date: Sun Feb 13 16:17:01 2022 +0100 random: pull add_hwgenerator_randomness() declaration into random.h commit b777c38239fec5a528e59f55b379e31b1a187524 upstream. add_hwgenerator_randomness() is a function implemented and documented inside of random.c. It is the way that hardware RNGs push data into it. Therefore, it should be declared in random.h. Otherwise sparse complains with: random.c:1137:6: warning: symbol 'add_hwgenerator_randomness' was not declared. Should it be static? The alternative would be to include hw_random.h into random.c, but that wouldn't really be good for anything except slowing down compile time. Cc: Matt Mackall Cc: Theodore Ts'o Acked-by: Herbert Xu Reviewed-by: Eric Biggers Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit f6b013eedc72c417a3e5afa1ad0204d8c38e8c29 Author: Harald Freudenberger Date: Tue Jul 11 09:36:23 2017 +0200 hwrng: remember rng chosen by user commit 10a515ddb5f19a1ff0b9882c430b4427843169f3 upstream. When a user chooses a rng source via sysfs attribute this rng should be sticky, even when other sources with better quality register later. This patch introduces a simple way to remember the user's choice. This is reflected by a new sysfs attribute file 'rng_selected' which shows if the current rng has been chosen by userspace. The new attribute file shows '1' for a user-selected rng and '0' otherwise. Signed-off-by: Harald Freudenberger Reviewed-by: PrasannaKumar Muralidharan Signed-off-by: Herbert Xu Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 42802952a2725f85f7e36ee3b29593af5fe87197 Author: Harald Freudenberger Date: Tue Jul 11 09:36:22 2017 +0200 hwrng: use rng source with best quality commit 2bbb6983887fefc8026beab01198d30f47b7bd22 upstream. This patch reworks the hwrng to always use the rng source with the best entropy quality. On registration and unregistration the hwrng now tries to choose the best (= highest quality value) rng source. The internal list of registered rng sources is now kept sorted by quality, and the topmost rng is chosen. Signed-off-by: Harald Freudenberger Reviewed-by: PrasannaKumar Muralidharan Signed-off-by: Herbert Xu Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 675d99b9f31db68bb5c23654297a8c6ff4cf11f6 Author: Corentin LABBE Date: Tue Dec 13 15:51:14 2016 +0100 hwrng: core - remove unused PFX macro commit 079840bd13f793b915f6c8e44452eeb4a0aba8ba upstream. This patch removes the unused PFX macro. Signed-off-by: Corentin Labbe Signed-off-by: Herbert Xu Signed-off-by: Jason A.
Donenfeld Signed-off-by: Greg Kroah-Hartman commit 2de01a4b424e3f9bf9d5f7aa46c7e69aabb8e183 Author: Corentin LABBE Date: Tue Dec 13 15:51:13 2016 +0100 hwrng: core - Move hwrng miscdev minor number to include/linux/miscdevice.h commit fd50d71f94fb1c8614098949db068cd4c8dbb91d upstream. This patch moves the define for hwrng's miscdev minor number to include/linux/miscdevice.h. It's better that all minor numbers are in the same place. Rename it to HWRNG_MINOR (from RNG_MISCDEV_MINOR) in the process, since no other miscdev defines have MISCDEV in their name. Signed-off-by: Corentin Labbe Signed-off-by: Herbert Xu Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 62f50335a68073b7b88aec32ef93bdf93400107e Author: Corentin LABBE Date: Tue Dec 13 15:51:11 2016 +0100 hwrng: core - Rewrite the header commit dd8014830d2b1fdf5328978ada706df3ec180c21 upstream. checkpatch has a lot of complaints about the header. Furthermore, the header contains some offtopic/useless information. This patch rewrites a proper header. Signed-off-by: Corentin Labbe Signed-off-by: Herbert Xu Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 836e91b6c9db8bde0cc02f14e17ad3477c7d8284 Author: Corentin LABBE Date: Tue Dec 13 15:51:10 2016 +0100 hwrng: core - rewrite better comparison to NULL commit 2a971e3b248775f808950bdc0ac75f12a2853eff upstream. This patch fixes the checkpatch warning "Comparison to NULL could be written "!ptr"". Signed-off-by: Corentin Labbe Signed-off-by: Herbert Xu Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit e1b293341cc3bea31b07e07f76adcbeab37ebe43 Author: Corentin LABBE Date: Tue Dec 13 15:51:09 2016 +0100 hwrng: core - do not use multiple blank lines commit 6bc17d90e62d16828d1a2113b54cfa4e04582fb6 upstream. This patch fixes the checkpatch warning "Please don't use multiple blank lines". Signed-off-by: Corentin Labbe Signed-off-by: Herbert Xu Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 86591cf983748db1ac40e4611b931fc1ef7134e6 Author: Jason A. Donenfeld Date: Sat Feb 12 23:57:38 2022 +0100 random: check for crng_init == 0 in add_device_randomness() commit 1daf2f387652bf3a7044aea042f5023b3f6b189b upstream. This has no real functional change, as crng_pre_init_inject() (and before that, crng_slow_load()) always checks for == 0, not >= 2. So correct the outer unlocked check to reflect that. Before this used crng_ready(), which was not correct. Cc: Theodore Ts'o Reviewed-by: Dominik Brodowski Reviewed-by: Eric Biggers Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit b64adad8495a8238fafa664c92a1271d5ea05d43 Author: Jason A. Donenfeld Date: Sat Feb 12 23:54:09 2022 +0100 random: unify early init crng load accounting commit da792c6d5f59a76c10a310c5d4c93428fd18f996 upstream. crng_fast_load() and crng_slow_load() have different semantics: - crng_fast_load() xors and accounts with crng_init_cnt. - crng_slow_load() hashes and doesn't account. However add_hwgenerator_randomness() can afford to hash (it's called from a kthread), and it should account. Additionally, ones that can afford to hash don't need to take a trylock but can take a normal lock. So, we combine these into one function, crng_pre_init_inject(), which allows us to control these in a uniform way. This will make it simpler later to simplify this all down when the time comes for that. Cc: Theodore Ts'o Reviewed-by: Dominik Brodowski Reviewed-by: Eric Biggers Signed-off-by: Jason A.
Donenfeld Signed-off-by: Greg Kroah-Hartman commit b195259cb54d0bef90099c786ec14ada11c1524d Author: Jason A. Donenfeld Date: Sat Feb 12 01:26:17 2022 +0100 random: do not take pool spinlock at boot commit afba0b80b977b2a8f16234f2acd982f82710ba33 upstream. Since rand_initialize() is run while interrupts are still off and nothing else is running, we don't need to repeatedly take and release the pool spinlock, especially in the RDSEED loop. Reviewed-by: Eric Biggers Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 4c22f2c589d98d2b1dadc262ab6dcee94482b5e2 Author: Jason A. Donenfeld Date: Fri Feb 4 16:15:46 2022 +0100 random: defer fast pool mixing to worker commit 58340f8e952b613e0ead0bed58b97b05bf4743c5 upstream. On PREEMPT_RT, it's problematic to take spinlocks from hard irq handlers. We can fix this by deferring to a workqueue the dumping of the fast pool into the input pool. We accomplish this with some careful rules on fast_pool->count: - When it's incremented to >= 64, we schedule the work. - If the top bit is set, we never schedule the work, even if >= 64. - The worker is responsible for setting it back to 0 when it's done. There are two small issues around using workqueues for this purpose that we work around. The first issue is that mix_interrupt_randomness() might be migrated to another CPU during CPU hotplug. This issue is rectified by checking that it hasn't been migrated (after disabling irqs). If it has been migrated, then we set the count to zero, so that when the CPU comes online again, it can requeue the work. As part of this, we switch to using an atomic_t, so that the increment in the irq handler doesn't wipe out the zeroing if the CPU comes back online while this worker is running. The second issue is that, though relatively minor in effect, we probably want to make sure we get a consistent view of the pool onto the stack, in case it's interrupted by an irq while reading. To do this, we don't reenable irqs until after the copy. There are only 18 instructions between the cli and sti, so this is a pretty tiny window. Cc: Thomas Gleixner Cc: Peter Zijlstra Cc: Theodore Ts'o Cc: Jonathan Neuschäfer Acked-by: Sebastian Andrzej Siewior Reviewed-by: Sultan Alsawaf Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit ac4bbc0a55ad71ff0a4bf266acf76c8a8820cb11 Author: Tejun Heo Date: Fri Sep 16 15:49:32 2016 -0400 workqueue: make workqueue available early during boot commit 3347fa0928210d96aaa2bd6cd5a8391d5e630873 upstream. Workqueue is currently initialized in an early init call; however, there are cases where early boot code has to be split and reordered to come after workqueue initialization or the same code path which makes use of workqueues is used both before workqueue initialization and after. The latter cases have to gate workqueue usages with keventd_up() tests, which is nasty and easy to get wrong. Workqueue usages have become widespread and it'd be a lot more convenient if it can be used very early from boot. This patch splits workqueue initialization into two steps. workqueue_init_early() which sets up the basic data structures so that workqueues can be created and work items queued, and workqueue_init() which actually brings up workqueues online and starts executing queued work items. The former step can be done very early during boot once memory allocation, cpumasks and idr are initialized. The latter right after kthreads become available.
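(Schematically, the boot sequence then looks like this -- a rough sketch of the ordering, with approximate placement, not the exact init/main.c changes:)

    asmlinkage __visible void __init start_kernel(void)
    {
        /* ...memory allocation, cpumasks and idr are initialized... */
        workqueue_init_early();   /* step 1: workqueues can be created
                                     and work items queued (not yet run) */
        /* ...the rest of early init, ending in rest_init()... */
    }

    static noinline void __init kernel_init_freeable(void)
    {
        /* step 2: kthreads are available now, so spawn workers and
         * start executing whatever was queued during early boot. */
        workqueue_init();
        /* ... */
    }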
This allows work item queueing and canceling from very early boot which is what most of these use cases want.

* As system_wq being initialized doesn't indicate that workqueue is fully online anymore, update keventd_up() to test wq_online instead. The follow-up patches will get rid of all its usages and the function itself.

* Flushing doesn't make sense before workqueue is fully initialized. The flush functions trigger WARN and return immediately before fully online.

* Work items are never in-flight before fully online. Canceling can always succeed by skipping the flush step.

* Some code paths can no longer assume to be called with irq enabled as irq is disabled during early boot. Use irqsave/restore operations instead.

v2: Watchdog init, which requires timer to be running, moved from workqueue_init_early() to workqueue_init(). Signed-off-by: Tejun Heo Suggested-by: Linus Torvalds Link: http://lkml.kernel.org/r/CA+55aFx0vPuMuxn00rBSM192n-Du5uxy+4AvKa0SBSOVJeuCGg@mail.gmail.com Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 95b59efc9e94e873a3ff480a40e4f989e7acc242 Author: Jason A. Donenfeld Date: Fri Feb 11 12:29:33 2022 +0100 random: rewrite header introductory comment commit 5f75d9f3babea8ae0a2d06724656874f41d317f5 upstream. Now that we've re-documented the various sections, we can remove the outdated text here and replace it with a high-level overview. Cc: Theodore Ts'o Reviewed-by: Eric Biggers Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit a4ec1ffb4624e05fda415a64eea9a3fcded8da2a Author: Jason A. Donenfeld Date: Fri Feb 11 12:53:34 2022 +0100 random: group sysctl functions commit 0deff3c43206c24e746b1410f11125707ad3040e upstream. This pulls all of the sysctl-focused functions into the sixth labeled section. No functional changes. Cc: Theodore Ts'o Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 3a0a33652ab30e3d85516832f80b52ba8fc17fb3 Author: Jason A. Donenfeld Date: Fri Feb 11 12:53:34 2022 +0100 random: group userspace read/write functions commit a6adf8e7a605250b911e94793fd077933709ff9e upstream. This pulls all of the userspace read/write-focused functions into the fifth labeled section. No functional changes. Cc: Theodore Ts'o Reviewed-by: Eric Biggers Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 0272b2bf0fa79c7aa9b73236d269cc5a0f4d7160 Author: Jason A. Donenfeld Date: Fri Feb 11 12:53:34 2022 +0100 random: group entropy collection functions commit 92c653cf14400946f376a29b828d6af7e01f38dd upstream. This pulls all of the entropy collection-focused functions into the fourth labeled section. No functional changes. Cc: Theodore Ts'o Reviewed-by: Dominik Brodowski Reviewed-by: Eric Biggers Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 4c6a6224d2469415707f0c8d56131340ee2eb643 Author: Jason A. Donenfeld Date: Fri Feb 11 12:53:34 2022 +0100 random: group entropy extraction functions commit a5ed7cb1a7732ef11959332d507889fbc39ebbb4 upstream. This pulls all of the entropy extraction-focused functions into the third labeled section. No functional changes. Cc: Theodore Ts'o Reviewed-by: Eric Biggers Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit c792ab87a2b4e9c44524a4e7b10ebdc844281c4c Author: Jason A.
Donenfeld Date: Fri Feb 11 12:53:34 2022 +0100 random: group initialization wait functions commit 5f1bb112006b104b3e2a1e1b39bbb9b2617581e6 upstream. This pulls all of the readiness waiting-focused functions into the first labeled section. No functional changes. Cc: Theodore Ts'o Reviewed-by: Dominik Brodowski Reviewed-by: Eric Biggers Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 2c60e61f858bc338561245133e5382406af2338b Author: Jason A. Donenfeld Date: Fri Feb 11 13:41:41 2022 +0100 random: remove whitespace and reorder includes commit 87e7d5abad0cbc9312dea7f889a57d294c1a5fcc upstream. This is purely cosmetic. Future work involves figuring out which of these headers we need and which we don't. Reviewed-by: Dominik Brodowski Reviewed-by: Eric Biggers Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 64155f45cdd8b94dc2fc2108c7ce5ae5a6297696 Author: Jason A. Donenfeld Date: Fri Feb 11 12:28:33 2022 +0100 random: remove useless header comment commit 6071a6c0fba2d747742cadcbb3ba26ed756ed73b upstream. This really adds nothing at all useful. Cc: Theodore Ts'o Reviewed-by: Dominik Brodowski Reviewed-by: Eric Biggers Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 0b553acf403b722a09870d9f1d0e185a02af8e1d Author: Jason A. Donenfeld Date: Fri Feb 11 12:19:49 2022 +0100 random: introduce drain_entropy() helper to declutter crng_reseed() commit 246c03dd899164d0186b6d685d6387f228c28d93 upstream. In preparation for separating responsibilities, break out the entropy count management part of crng_reseed() into its own function. No functional changes. Cc: Theodore Ts'o Reviewed-by: Dominik Brodowski Reviewed-by: Eric Biggers Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 5401246cd57fae3d2aff1de46105dc5a6d35c0cf Author: Jason A. Donenfeld Date: Thu Feb 10 17:01:27 2022 +0100 random: deobfuscate irq u32/u64 contributions commit b2f408fe403800c91a49f6589d95b6759ce1b30b upstream. In the irq handler, we fill out 16 bytes differently on 32-bit and 64-bit platforms, and for 32-bit vs 64-bit cycle counters, which doesn't always correspond with the bitness of the platform. Whether or not you like this strangeness, it is a matter of fact. But it might not be a fact you well realized until now, because the code that loaded the irq info into 4 32-bit words was quite confusing. Instead, this commit makes everything explicit by having separate (compile-time) branches for 32-bit and 64-bit types. Cc: Theodore Ts'o Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 65e03354493db51628b4671995454967f1f80e27 Author: Jason A. Donenfeld Date: Thu Feb 10 16:43:57 2022 +0100 random: add proper SPDX header commit a07fdae346c35c6ba286af1c88e0effcfa330bf9 upstream. Convert the current license into the SPDX notation of "(GPL-2.0 OR BSD-3-Clause)". This infers GPL-2.0 from the text "ALTERNATIVELY, this product may be distributed under the terms of the GNU General Public License, in which case the provisions of the GPL are required INSTEAD OF the above restrictions" and it infers BSD-3-Clause from the verbatim BSD 3 clause license in the file. Cc: Thomas Gleixner Cc: Theodore Ts'o Cc: Dominik Brodowski Reviewed-by: Greg Kroah-Hartman Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 11e8da38f351858c326873ec8493b163eec4fc5c Author: Jason A. 
Donenfeld Date: Thu Feb 10 16:40:44 2022 +0100 random: remove unused tracepoints commit 14c174633f349cb41ea90c2c0aaddac157012f74 upstream. These explicit tracepoints aren't really used and show signs of aging. It's work to keep these up to date, and before I attempted to keep them up to date, they weren't up to date, which indicates that they're not really used. These days there are better ways of introspecting anyway. Cc: Theodore Ts'o Reviewed-by: Dominik Brodowski Reviewed-by: Eric Biggers Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 0c8771425514e7379227c6d31717cd1ee105b80f Author: Jason A. Donenfeld Date: Thu Feb 10 16:35:24 2022 +0100 random: remove ifdef'd out interrupt bench commit 95e6060c20a7f5db60163274c5222a725ac118f9 upstream. With tools like kbench9000 giving more fine-grained responses, and this basically never having been used ever since it was initially added, let's just get rid of this. There *is* still work to be done on the interrupt handler, but this really isn't the way it's being developed. Cc: Theodore Ts'o Reviewed-by: Eric Biggers Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 9d436efe2e237e62fd56160700f3d54fed04b645 Author: Jason A. Donenfeld Date: Wed Feb 9 22:46:48 2022 +0100 random: tie batched entropy generation to base_crng generation commit 0791e8b655cc373718f0f58800fdc625a3447ac5 upstream. Now that we have an explicit base_crng generation counter, we don't need a separate one for batched entropy. Rather, we can just move the generation forward every time we change crng_init state or update the base_crng key. Cc: Theodore Ts'o Reviewed-by: Eric Biggers Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit c1fc9536d031aa4fe7a39f6e2cecea0476fabe2a Author: Jason A. Donenfeld Date: Wed Feb 9 18:42:13 2022 +0100 random: zero buffer after reading entropy from userspace commit 7b5164fb1279bf0251371848e40bae646b59b3a8 upstream. This buffer may contain entropic data that shouldn't stick around longer than needed, so zero out the temporary buffer at the end of write_pool(). Reviewed-by: Dominik Brodowski Reviewed-by: Jann Horn Reviewed-by: Eric Biggers Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 2ae2d53518f272f270f4934b488bb20c84cc355b Author: Jason A. Donenfeld Date: Mon Feb 7 23:37:13 2022 +0100 random: remove outdated INT_MAX >> 6 check in urandom_read() commit 434537ae54ad37e93555de21b6ac8133d6d773a9 upstream. In 79a8468747c5 ("random: check for increase of entropy_count because of signed conversion"), a number of checks were added around what values were passed to account(), because account() was doing fancy fixed point fractional arithmetic, and a user had some ability to pass large values directly into it. One of the things in that commit was limiting those values to INT_MAX >> 6. The first >> 3 was for bytes to bits, and the next >> 3 was for bits to 1/8 fractional bits. However, for several years now, urandom reads no longer touch entropy accounting, and so this check serves no purpose. The current flow is: urandom_read_nowarn()-->get_random_bytes_user()-->chacha20_block() Of course, we don't want that size_t to be truncated when adding it into the ssize_t. But we arrive at urandom_read_nowarn() in the first place either via ordinary fops, which limits reads to MAX_RW_COUNT, or via getrandom() which limits reads to INT_MAX.
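(To make those bounds concrete -- a hedged sketch of the syscall-side clamp; the shape is approximate, not the verbatim kernel source:)

    /* getrandom(2) caps the request so the ssize_t return stays sane;
     * fops reads are separately capped at MAX_RW_COUNT by the VFS. */
    SYSCALL_DEFINE3(getrandom, char __user *, buf, size_t, count,
                    unsigned int, flags)
    {
        if (count > INT_MAX)
            count = INT_MAX;

        /* ...flag checks and the wait for crng readiness elided... */

        return urandom_read_nowarn(NULL, buf, count, NULL);
    }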
Cc: Theodore Ts'o Reviewed-by: Dominik Brodowski Reviewed-by: Jann Horn Reviewed-by: Eric Biggers Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 508bad0b719bb1e932490df792b31203b69360fe Author: Jason A. Donenfeld Date: Tue Feb 8 19:23:17 2022 +0100 random: use hash function for crng_slow_load() commit 66e4c2b9541503d721e936cc3898c9f25f4591ff upstream. Since we have a hash function that's really fast, and the goal of crng_slow_load() is reportedly to "touch all of the crng's state", we can just hash the old state together with the new state and call it a day. This way we don't need to reason about another LFSR or worry about various attacks there. This code is only ever used at early boot and then never again. Cc: Theodore Ts'o Reviewed-by: Dominik Brodowski Reviewed-by: Eric Biggers Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit a0a276481b4ded8edb4b2b5caade5ee1c1d29aad Author: Jason A. Donenfeld Date: Wed Feb 9 01:56:35 2022 +0100 random: absorb fast pool into input pool after fast load commit c30c575db4858f0bbe5e315ff2e529c782f33a1f upstream. During crng_init == 0, we never credit entropy in add_interrupt_randomness(), but instead dump it directly into the primary_crng. That's fine, except for the fact that we then wind up throwing away that entropy later when we switch to extracting from the input pool and xoring into (and later in this series overwriting) the primary_crng key. The two other early init sites -- add_hwgenerator_randomness()'s use of crng_fast_load() and add_device_randomness()'s use of crng_slow_load() -- always additionally give their inputs to the input pool. But not add_interrupt_randomness(). This commit fixes that shortcoming by calling mix_pool_bytes() after crng_fast_load() in add_interrupt_randomness(). That's partially verboten on PREEMPT_RT, where it implies taking spinlock_t from an IRQ handler. But this also only happens during early boot and then never again after that. Plus it's a trylock so it has the same considerations as calling crng_fast_load(), which we're already using. Cc: Theodore Ts'o Reviewed-by: Dominik Brodowski Reviewed-by: Eric Biggers Suggested-by: Eric Biggers Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit cc7535a0208a10ce56eaea687b36d3f4ddef9a74 Author: Jason A. Donenfeld Date: Tue Feb 8 13:00:11 2022 +0100 random: do not xor RDRAND when writing into /dev/random commit 91c2afca290ed3034841c8c8532e69ed9e16cf34 upstream. Continuing the reasoning of "random: ensure early RDSEED goes through mixer on init", we don't want RDRAND interacting with anything without going through the mixer function, as a backdoored CPU could presumably cancel out data during an xor, which it'd have a harder time doing when being forced through a cryptographic hash function. There's actually no need at all to be calling RDRAND in write_pool(), because before we extract from the pool, we always do so with 32 bytes of RDSEED hashed in at that stage. Xoring at this stage is needless and introduces a minor liability. Cc: Theodore Ts'o Reviewed-by: Dominik Brodowski Reviewed-by: Eric Biggers Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 4e575837e4348cc5f9f0892615e1c414af399380 Author: Jason A. Donenfeld Date: Tue Feb 8 12:44:28 2022 +0100 random: ensure early RDSEED goes through mixer on init commit a02cf3d0dd77244fd5333ac48d78871de459ae6d upstream.
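(In sketch form, the init-time ordering this commit describes -- hedged, with approximate names; crng_seed_from() is an illustrative stand-in for the actual reseeding path:)

    static void __init seed_rng_sketch(void)
    {
        u8 seed[32];
        unsigned long v;
        int i;

        /* RDSEED (or a cycle-counter fallback) is hashed in first... */
        for (i = 0; i < 4; i++) {
            if (!arch_get_random_seed_long(&v))
                v = random_get_entropy();
            mix_pool_bytes(&v, sizeof(v));
        }

        /* ...followed by the likely-more-predictable material. */
        mix_pool_bytes(utsname(), sizeof(*utsname()));

        /* The crng only ever sees hash output, never raw RDSEED, so a
         * malicious seed value would need an on-the-fly preimage. */
        extract_entropy(seed, sizeof(seed));
        crng_seed_from(seed, sizeof(seed));  /* illustrative helper */
        memzero_explicit(seed, sizeof(seed));
    }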
Continuing the reasoning of "random: use RDSEED instead of RDRAND in entropy extraction" from this series, at init time we also don't want to be xoring RDSEED directly into the crng. Instead it's safer to put it into our entropy collector and then re-extract it, so that it goes through a hash function with preimage resistance. As a matter of hygiene, we also order these now so that the RDSEED bytes are hashed in first, followed by the bytes that are likely more predictable (e.g. utsname()). Cc: Theodore Ts'o Reviewed-by: Dominik Brodowski Reviewed-by: Eric Biggers Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 6bc282bedfc988db493ee69232ecb35fc0f91226 Author: Jason A. Donenfeld Date: Tue Feb 8 12:40:14 2022 +0100 random: inline leaves of rand_initialize() commit 8566417221fcec51346ec164e920dacb979c6b5f upstream. This is a preparatory commit for the following one. We simply inline the various functions that rand_initialize() calls that have no other callers. The compiler was doing this anyway before. Doing this will allow us to reorganize this after. We can then move the trust_cpu and parse_trust_cpu definitions a bit closer to where they're actually used, which makes the code easier to read. Cc: Theodore Ts'o Reviewed-by: Dominik Brodowski Reviewed-by: Eric Biggers Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit a07ddc05f919c1b26d5ad60cd3befb0a1543c795 Author: Jason A. Donenfeld Date: Tue Feb 8 12:18:33 2022 +0100 random: use RDSEED instead of RDRAND in entropy extraction commit 28f425e573e906a4c15f8392cc2b1561ef448595 upstream. When /dev/random was directly connected with entropy extraction, without any expansion stage, extract_buf() was called for every 10 bytes of data read from /dev/random. For that reason, RDRAND was used rather than RDSEED. At the same time, crng_reseed() was still only called every 5 minutes, so there RDSEED made sense. Those olden days were also a time when the entropy collector did not use a cryptographic hash function, which meant most bets were off in terms of real preimage resistance. For that reason too it didn't matter _that_ much whether RDSEED was mixed in before or after entropy extraction; both choices were sort of bad. But now we have a cryptographic hash function at work, and with that we get real preimage resistance. We also now only call extract_entropy() every 5 minutes, rather than every 10 bytes. This allows us to do two important things. First, we can switch to using RDSEED in extract_entropy(), as Dominik suggested. Second, we can ensure that RDSEED input always goes into the cryptographic hash function with other things before being used directly. This eliminates a category of attacks in which the CPU knows the current state of the crng and knows that we're going to xor RDSEED into it, and so it computes a malicious RDSEED. By going through our hash function, it would require the CPU to compute a preimage on the fly, which isn't going to happen. Cc: Theodore Ts'o Reviewed-by: Eric Biggers Reviewed-by: Dominik Brodowski Suggested-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 9924c212e0b20a2c249bfed802acca72ac9027b4 Author: Dominik Brodowski Date: Sat Feb 5 11:34:57 2022 +0100 random: fix locking in crng_fast_load() commit 7c2fe2b32bf76441ff5b7a425b384e5f75aa530a upstream. crng_init is protected by primary_crng->lock, so keep holding that lock when incrementing crng_init from 0 to 1 in crng_fast_load().
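(The shape of the fix, as a hedged sketch with approximate names, not the actual diff -- the point is only that the 0 -> 1 transition stays under the lock while the printk waits until after release:)

    static size_t crng_fast_load_sketch(const u8 *cp, size_t len)
    {
        unsigned long flags;
        size_t ret = 0;
        bool init_done = false;

        if (!spin_trylock_irqsave(&primary_crng.lock, flags))
            return 0;
        if (crng_init != 0) {
            spin_unlock_irqrestore(&primary_crng.lock, flags);
            return 0;
        }
        while (len-- && crng_init_cnt < CRNG_INIT_CNT_THRESH) {
            crng_key[crng_init_cnt++] ^= *cp++;  /* illustrative mixing */
            ret++;
        }
        if (crng_init_cnt >= CRNG_INIT_CNT_THRESH) {
            crng_init = 1;            /* 0 -> 1 while still holding lock */
            init_done = true;
        }
        spin_unlock_irqrestore(&primary_crng.lock, flags);
        if (init_done)
            pr_notice("random: fast init done\n");
        return ret;
    }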
The call to pr_notice() can wait until the lock is released; this code path cannot be reached twice, as crng_fast_load() aborts early if crng_init > 0. Signed-off-by: Dominik Brodowski Reviewed-by: Eric Biggers Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 90ebbb477d2d2b679e6e2d8b3fc837c171c117c9 Author: Jason A. Donenfeld Date: Fri Jan 28 23:29:45 2022 +0100 random: remove batched entropy locking commit 77760fd7f7ae3dfd03668204e708d1568d75447d upstream. Rather than use spinlocks to protect batched entropy, we can instead disable interrupts locally, since we're dealing with per-cpu data, and manage resets with a basic generation counter. At the same time, we can't quite do this on PREEMPT_RT, where we still want spinlocks-as-mutexes semantics. So we use a local_lock_t, which provides the right behavior for each. Because this is a per-cpu lock, that generation counter is still doing the necessary CPU-to-CPU communication. This should improve performance a bit. It will also fix the linked splat that Jonathan received with PROVE_RAW_LOCK_NESTING=y. Reviewed-by: Sebastian Andrzej Siewior Reviewed-by: Dominik Brodowski Reviewed-by: Eric Biggers Suggested-by: Andy Lutomirski Reported-by: Jonathan Neuschäfer Tested-by: Jonathan Neuschäfer Link: https://lore.kernel.org/lkml/YfMa0QgsjCVdRAvJ@latitude/ Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit a2b4d5b6a3b69ce46a3b823c0bc0121eb662d728 Author: Eric Biggers Date: Fri Feb 4 14:17:33 2022 -0800 random: remove use_input_pool parameter from crng_reseed() commit 5d58ea3a31cc98b9fa563f6921d3d043bf0103d1 upstream. The primary_crng is always reseeded from the input_pool, while the NUMA crngs are always reseeded from the primary_crng. Remove the redundant 'use_input_pool' parameter from crng_reseed() and just directly check whether the crng is the primary_crng. Signed-off-by: Eric Biggers Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit e7a94af6be9fbded0bffeb8fbba80814e84ed668 Author: Jason A. Donenfeld Date: Fri Feb 4 01:45:53 2022 +0100 random: make credit_entropy_bits() always safe commit a49c010e61e1938be851f5e49ac219d49b704103 upstream. This is called from various hwgenerator drivers, so rather than having one "safe" version for userspace and one "unsafe" version for the kernel, just make everything safe; the checks are cheap and sensible to have anyway. Reported-by: Sultan Alsawaf Reviewed-by: Eric Biggers Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 75ec1afb5c6889aa8fe8dde0caaacc23a119401d Author: Jason A. Donenfeld Date: Sat Feb 5 14:00:58 2022 +0100 random: always wake up entropy writers after extraction commit 489c7fc44b5740d377e8cfdbf0851036e493af00 upstream. Now that POOL_BITS == POOL_MIN_BITS, we must unconditionally wake up entropy writers after every extraction. Therefore there's no point to write_wakeup_threshold, so we can move it to the dustbin of unused compatibility sysctls. While we're at it, we can fix a small comparison where we were waking up after <= min rather than < min. Cc: Theodore Ts'o Suggested-by: Eric Biggers Reviewed-by: Eric Biggers Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 316d312ffd3820e06cdd54c51d0ed094489d4dbb Author: Jason A. Donenfeld Date: Thu Feb 3 13:28:06 2022 +0100 random: use linear min-entropy accumulation crediting commit c570449094844527577c5c914140222cb1893e3f upstream.
30e37ec516ae ("random: account for entropy loss due to overwrites") assumed that adding new entropy to the LFSR pool probabilistically cancelled out old entropy there, so entropy was credited asymptotically, approximating Shannon entropy of independent sources (rather than a stronger min-entropy notion) using 1/8th fractional bits and replacing a constant 2-2/√e term (~0.786938) with 3/4 (0.75) to slightly underestimate it. This wasn't superb, but it was perhaps better than nothing, so that's what was done. Which entropy specifically was being cancelled out and how much precisely each time is hard to tell, though as I showed with the attack code in my previous commit, a motivated adversary with sufficient information can actually cancel out everything. Since we're no longer using an LFSR for entropy accumulation, this probabilistic cancellation is no longer relevant. Rather, we're now using a computational hash function as the accumulator and we've switched to working in the random oracle model, from which we can now revisit the question of min-entropy accumulation, which the literature works out in detail. Consider a long input bit string that is built by concatenating various smaller independent input bit strings. Each one of these inputs has a designated min-entropy, which is what we're passing to credit_entropy_bits(h). When we pass the concatenation of these to a random oracle, it means that an adversary trying to receive back the same reply as us would need to become certain about each part of the concatenated bit string we passed in, which means becoming certain about all of those h values. That means we can estimate the accumulation by simply adding up the h values in calls to credit_entropy_bits(h); there's no probabilistic cancellation at play like there was said to be for the LFSR. Incidentally, this is also what other entropy accumulators based on computational hash functions do as well. So this commit replaces credit_entropy_bits(h) with essentially `total = min(POOL_BITS, total + h)`, done with a cmpxchg loop as before. What if we're wrong and the above is nonsense? It's not, but let's assume we don't want the actual _behavior_ of the code to change much. Currently that behavior is not extracting from the input pool until it has 128 bits of entropy in it. With the old algorithm, we'd hit that magic 128 number after roughly 256 calls to credit_entropy_bits(1). So, we can retain more or less the old behavior by waiting to extract from the input pool until it hits 256 bits of entropy using the new code. For people concerned about this change, it means that there's not that much practical behavioral change. And for folks actually trying to model the behavior rigorously, it means that we have an even higher margin against attacks. Cc: Theodore Ts'o Cc: Dominik Brodowski Cc: Greg Kroah-Hartman Reviewed-by: Eric Biggers Reviewed-by: Jean-Philippe Aumasson Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 6820fc47245569f4dcce5a4677f065a6c0598033 Author: Jason A. Donenfeld Date: Wed Feb 2 13:30:03 2022 +0100 random: simplify entropy debiting commit 9c07f57869e90140080cfc282cc628d123e27704 upstream. Our pool is 256 bits, and we only ever use all of it or don't use it at all, which is decided by whether or not it has at least 128 bits in it. So we can drastically simplify the accounting and cmpxchg loop to do exactly this. While we're at it, we move the minimum bit size into a constant so it can be shared between the two places where it matters.
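(A hedged sketch of the resulting accounting -- saturating credit, all-or-nothing debit -- with illustrative names; the real code operates on the input pool's entropy_count:)

    enum { POOL_BITS = 256, POOL_MIN_BITS = POOL_BITS / 2 };
    static atomic_t entropy_count;

    /* Linear crediting: total = min(POOL_BITS, total + nbits). */
    static void credit_entropy_bits_sketch(int nbits)
    {
        int orig, add;

        do {
            orig = atomic_read(&entropy_count);
            add = min(nbits, POOL_BITS - orig);
        } while (atomic_cmpxchg(&entropy_count, orig, orig + add) != orig);
    }

    /* Debiting: either take everything for a reseed, or nothing. */
    static bool drain_entropy_sketch(void)
    {
        int orig;

        do {
            orig = atomic_read(&entropy_count);
            if (orig < POOL_MIN_BITS)
                return false;
        } while (atomic_cmpxchg(&entropy_count, orig, 0) != orig);
        return true;
    }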
The reason we want any of this is for the case in which an attacker has compromised the current state, and then bruteforces small amounts of entropy added to it. By demanding a particular minimum amount of entropy be present before reseeding, we make that bruteforcing difficult. Note that this rationale no longer includes anything about /dev/random blocking at the right moment, since /dev/random no longer blocks (except for at ~boot), but rather uses the crng. In a former life, /dev/random was different and therefore required a more nuanced account(), but this is no longer the case. Behaviorally, nothing changes here. This is just a simplification of the code. Cc: Theodore Ts'o Cc: Greg Kroah-Hartman Reviewed-by: Eric Biggers Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit bc5f6f0670c5618dcf4fc16482a5dc63be64eebf Author: Jason A. Donenfeld Date: Sun Jan 16 14:23:10 2022 +0100 random: use computational hash for entropy extraction commit 6e8ec2552c7d13991148e551e3325a624d73fac6 upstream. The current 4096-bit LFSR used for entropy collection had a few desirable attributes for the context in which it was created. For example, the state was huge, which meant that /dev/random would be able to output quite a bit of accumulated entropy before blocking. It was also, in its time, quite fast at accumulating entropy byte-by-byte, which matters given the varying contexts in which mix_pool_bytes() is called. And its diffusion was relatively high, which meant that changes would ripple across several words of state rather quickly. However, it also suffers from a few security vulnerabilities. In particular, inputs learned by an attacker can be undone, but moreover, if the state of the pool leaks, its contents can be controlled and entirely zeroed out. I've demonstrated this attack with an SMT2 script, which Boolector/CaDiCaL solves in a matter of seconds on a single core of my laptop, resulting in little proof-of-concept C demonstrators. For basically all recent formal models of RNGs, these attacks represent a significant cryptographic flaw. But how does this manifest practically? If an attacker has access to the system to such a degree that he can learn the internal state of the RNG, arguably there are other lower hanging vulnerabilities -- side-channel, infoleak, or otherwise -- that might have higher priority. On the other hand, seed files are frequently used on systems that have a hard time generating much entropy on their own, and these seed files, being files, often leak or are duplicated and distributed accidentally, or are even seeded over the Internet intentionally, where their contents might be recorded or tampered with. Seen this way, an otherwise quasi-implausible vulnerability is a bit more practical than initially thought. Another aspect of the current mix_pool_bytes() function is that, while its performance was arguably competitive for the time in which it was created, it's no longer considered so. This patch improves performance significantly: on a high-end CPU, an i7-11850H, it improves performance of mix_pool_bytes() by 225%, and on a low-end CPU, a Cortex-A7, it improves performance by 103%. This commit replaces the LFSR of mix_pool_bytes() with a straightforward cryptographic hash function, BLAKE2s, which is already in use for pool extraction. Universal hashing with a secret seed was considered too, but the requirement for a secret seed makes for a chicken & egg problem.
Instead we go with a formally proven scheme using a computational hash function, described in sections 5.1, 6.4, and B.1.8 of the analysis cited in the upstream commit. BLAKE2s outputs 256 bits, which should give us an appropriate amount of min-entropy accumulation, and a wide enough margin of collision resistance against active attacks. mix_pool_bytes() becomes a simple call to blake2s_update(), for accumulation, while the extraction step becomes a blake2s_final() to generate a seed, with which we can then do an HKDF-like or BLAKE2X-like expansion, the first part of which we fold back as an init key for subsequent blake2s_update()s, and the rest we produce to the caller. This then is provided to our CRNG like usual. In that expansion step, we make opportunistic use of 32 bytes of RDRAND output, just as before. We also always reseed the crng with 32 bytes, unconditionally, or not at all, rather than sometimes with 16 as before, as we don't win anything by limiting beyond the 16 byte threshold. Going for a hash function as an entropy collector is a conservative, proven approach. The result of all this is a much simpler and much less bespoke construction than what's there now, which not only plugs a vulnerability but also improves performance considerably. Cc: Theodore Ts'o Cc: Dominik Brodowski Reviewed-by: Eric Biggers Reviewed-by: Greg Kroah-Hartman Reviewed-by: Jean-Philippe Aumasson Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 7b5e3c485bf107313f78a85dfe58e3c7dce29ac7 Author: Dominik Brodowski Date: Sun Jan 30 22:03:20 2022 +0100 random: only call crng_finalize_init() for primary_crng commit 9d5505f1eebeca778074a0260ed077fd85f8792c upstream. crng_finalize_init() returns instantly if it is called for a pool other than primary_crng. The test whether crng_finalize_init() is still required can be moved to the relevant caller in crng_reseed(), and crng_need_final_init can be reset to false if crng_finalize_init() is called with workqueues ready. Then, no previous callsite will call crng_finalize_init() unless it is needed, and we can get rid of the superfluous function parameter. Signed-off-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit b68c62103908472212ca250b816c64f90422f72b Author: Dominik Brodowski Date: Sun Jan 30 22:03:19 2022 +0100 random: access primary_pool directly rather than through pointer commit ebf7606388732ecf2821ca21087e9446cb4a5b57 upstream. Both crng_initialize_primary() and crng_init_try_arch_early() are only called for the primary_pool. Accessing it directly instead of through a function parameter simplifies the code. Signed-off-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 1e78b9755998143c83f4b1329246c2d5d2cbaf97 Author: Dominik Brodowski Date: Tue Jan 25 21:14:57 2022 +0100 random: continually use hwgenerator randomness commit c321e907aa4803d562d6e70ebed9444ad082f953 upstream. The rngd kernel thread may sleep indefinitely if the entropy count is kept above random_write_wakeup_bits by other entropy sources. To make best use of multiple sources of randomness, mix entropy from hardware RNGs into the pool at least once within CRNG_RESEED_INTERVAL. Cc: Herbert Xu Cc: Jason A. Donenfeld Signed-off-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit e96b8df348a222f67fe4016d914ba9449be14fd1 Author: Jason A.
Donenfeld Date: Mon Jan 17 18:43:02 2022 +0100 random: simplify arithmetic function flow in account() commit a254a0e4093fce8c832414a83940736067eed515 upstream. Now that have_bytes is never modified, we can simplify this function. First, we move the check for negative entropy_count to be first. That ensures that subsequent reads of this will be non-negative. Then, have_bytes and ibytes can be folded into their one use site in the min_t() function. Suggested-by: Dominik Brodowski Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 35a490a35c32d92dc90a8f1951b22310c086e9e8 Author: Jason A. Donenfeld Date: Sat Jan 15 14:40:04 2022 +0100 random: access input_pool_data directly rather than through pointer commit 6c0eace6e1499712583b6ee62d95161e8b3449f5 upstream. This gets rid of another abstraction we no longer need. It would be nice if we could instead make pool an array rather than a pointer, but the latent entropy plugin won't be able to do its magic in that case. So instead we put all accesses to the input pool's actual data through the input_pool_data array directly. Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 5501511b3ced6cc917d4e2e2b3590b6181eff541 Author: Jason A. Donenfeld Date: Thu Jan 13 18:18:48 2022 +0100 random: cleanup fractional entropy shift constants commit 18263c4e8e62f7329f38f5eadc568751242ca89c upstream. The entropy estimator is calculated in terms of 1/8 bits, which means there are various constants where things are shifted by 3. Move these into our pool info enum with the other relevant constants. While we're at it, move an English assertion about sizes into a proper BUILD_BUG_ON so that the compiler can ensure this invariant. Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit dbc08ad3ef5eb818ae290f7e26a2c8f4e342945c Author: Jason A. Donenfeld Date: Fri Jan 14 16:48:35 2022 +0100 random: prepend remaining pool constants with POOL_ commit b3d51c1f542113342ddfbf6007e38a684b9dbec9 upstream. The other pool constants are prepended with POOL_, but not these last ones. Rename them. This will then let us move them into the enum in the following commit. Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit d1c2d8ed74500d108733ffe0c9d6cd8426f9e1a1 Author: Jason A. Donenfeld Date: Thu Jan 13 16:11:21 2022 +0100 random: de-duplicate INPUT_POOL constants commit 5b87adf30f1464477169a1d653e9baf8c012bbfe upstream. We already had the POOL_* constants, so deduplicate the older INPUT_POOL ones. As well, fold EXTRACT_SIZE into the poolinfo enum, since it's related. Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 2fe1e7fe8627f54bf0947f93d9640ad380aac493 Author: Jason A. Donenfeld Date: Thu Jan 13 15:51:06 2022 +0100 random: remove unused OUTPUT_POOL constants commit 0f63702718c91d89c922081ac1e6baeddc2d8b1a upstream. We no longer have an output pool. Rather, we have just a wakeup bits threshold for /dev/random reads, presumably so that processes don't hang. This value, random_write_wakeup_bits, is configurable anyway. So all the no longer usefully named OUTPUT_POOL constants were doing was setting a reasonable default for random_write_wakeup_bits. This commit gets rid of the constants and just puts it all in the default value of random_write_wakeup_bits. Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. 
Donenfeld Signed-off-by: Greg Kroah-Hartman commit 17b4b12951936944600836f4a912059a7ebd2198 Author: Jason A. Donenfeld Date: Wed Jan 12 17:18:08 2022 +0100 random: rather than entropy_store abstraction, use global commit 90ed1e67e896cc8040a523f8428fc02f9b164394 upstream. Originally, the RNG used several pools, so having things abstracted out over a generic entropy_store object made sense. These days, there's only one input pool, and then an uneven mix of usage via the abstraction and usage via &input_pool. Rather than this uneasy mixture, just get rid of the abstraction entirely and have things always use the global. This simplifies the code and makes reading it a bit easier. Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 1d77add4f6007afa0b2b4a84d06f5fc14a6b8579 Author: Linus Torvalds Date: Sat Sep 28 16:53:52 2019 -0700 random: try to actively add entropy rather than passively wait for it commit 50ee7529ec4500c88f8664560770a7a1b65db72b upstream. For 5.3 we had to revert a nice ext4 IO pattern improvement, because it caused a bootup regression due to lack of entropy at bootup together with arguably broken user space that was asking for secure random numbers when it really didn't need to. See commit 72dbcf721566 (Revert "ext4: make __ext4_get_inode_loc plug"). This aims to solve the issue by actively generating entropy noise using the CPU cycle counter when waiting for the random number generator to initialize. This only works when you have a high-frequency time stamp counter available, but that's the case on all modern x86 CPU's, and on most other modern CPU's too. What we do is to generate jitter entropy from the CPU cycle counter under a somewhat complex load: calling the scheduler while also guaranteeing a certain amount of timing noise by also triggering a timer. I'm sure we can tweak this, and that people will want to look at other alternatives, but there's been a number of papers written on jitter entropy, and this should really be fairly conservative by crediting one bit of entropy for every timer-induced jump in the cycle counter. Not because the timer itself would be all that unpredictable, but because the interaction between the timer and the loop is going to be. Even if (and perhaps particularly if) the timer actually happens on another CPU, the cacheline interaction between the loop that reads the cycle counter and the timer itself firing is going to add perturbations to the cycle counter values that get mixed into the entropy pool. As Thomas pointed out, with a modern out-of-order CPU, even quite simple loops show a fair amount of hard-to-predict timing variability even in the absence of external interrupts. But this tries to take that further by actually having a fairly complex interaction. This is not going to solve the entropy issue for architectures that have no CPU cycle counter, but it's not clear how (and if) that is solvable, and the hardware in question is largely starting to be irrelevant. And by doing this we can at least avoid some of the even more contentious approaches (like making the entropy waiting time out in order to avoid the possibly unbounded waiting). Cc: Ahmed Darwish Cc: Thomas Gleixner Cc: Theodore Ts'o Cc: Nicholas Mc Guire Cc: Andy Lutomirski Cc: Kees Cook Cc: Willy Tarreau Cc: Alexander E.
Patrakov Cc: Lennart Poettering Cc: Noah Meyerhans Signed-off-by: Linus Torvalds Signed-off-by: Greg Kroah-Hartman Signed-off-by: Greg Kroah-Hartman commit fbf83b78592a45c236b3b5605978f32d9e7a251a Author: Jason A. Donenfeld Date: Wed Jan 12 15:28:21 2022 +0100 random: remove unused extract_entropy() reserved argument commit 8b2d953b91e7f60200c24067ab17b77cc7bfd0d4 upstream. This argument is always set to zero, as a result of us not caring about keeping a certain amount reserved in the pool these days. So just remove it and cleanup the function signatures. Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 809e9da9c1ca1b09f8e60f61ec806c56d1e6697f Author: Jason A. Donenfeld Date: Wed Jan 12 15:22:30 2022 +0100 random: remove incomplete last_data logic commit a4bfa9b31802c14ff5847123c12b98d5e36b3985 upstream. There were a few things added under the "if (fips_enabled)" banner, which never really got completed, and the FIPS people anyway are choosing a different direction. Rather than keep around this halfbaked code, get rid of it so that we can focus on a single design of the RNG rather than two designs. Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit ff2063de8a217096bc90c1d9ad1a867b39381d26 Author: Jason A. Donenfeld Date: Sun Jan 9 17:48:58 2022 +0100 random: cleanup integer types commit d38bb0853589c939573ea50e9cb64f733e0e273d upstream. Rather than using the userspace type, __uXX, switch to using uXX. And rather than using variously chosen `char *` or `unsigned char *`, use `u8 *` uniformly for things that aren't strings, in the case where we are doing byte-by-byte traversal. Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 70e975501f90683c0fa7e4c64bb45be25cd9b3d3 Author: Eric Biggers Date: Tue Sep 11 20:05:10 2018 -0700 crypto: chacha20 - Fix chacha20_block() keystream alignment (again) [ Upstream commit a5e9f557098e54af44ade5d501379be18435bfbf ] In commit 9f480faec58c ("crypto: chacha20 - Fix keystream alignment for chacha20_block()"), I had missed that chacha20_block() can be called directly on the buffer passed to get_random_bytes(), which can have any alignment. So, while my commit didn't break anything, it didn't fully solve the alignment problems. Revert my solution and just update chacha20_block() to use put_unaligned_le32(), so the output buffer need not be aligned. This is simpler, and on many CPUs it's the same speed. But, I kept the 'tmp' buffers in extract_crng_user() and _get_random_bytes() 4-byte aligned, since that alignment is actually needed for _crng_backtrack_protect() too. Reported-by: Stephan Müller Cc: Theodore Ts'o Signed-off-by: Eric Biggers Signed-off-by: Herbert Xu Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman commit 61de33cef583ef08159fdc40ace4dd3c552c07c7 Author: Jason A. Donenfeld Date: Sun Jan 9 17:32:02 2022 +0100 random: cleanup poolinfo abstraction commit 91ec0fe138f107232cb36bc6112211db37cb5306 upstream. Now that we're only using one polynomial, we can cleanup its representation into constants, instead of passing around pointers dynamically to select different polynomials. This improves the codegen and makes the code a bit more straightforward. Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. 
Donenfeld Signed-off-by: Greg Kroah-Hartman commit c039ecda6405c6750e4ad9fa0c2bcd83868f1cd5 Author: Schspa Shi Date: Fri Jan 14 16:12:16 2022 +0800 random: fix typo in comments commit c0a8a61e7abbf66729687ee63659ee25983fbb1e upstream. s/or/for Signed-off-by: Schspa Shi Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit cd120d1dfdefce57cc9fecea8c6c66910a3cbd9c Author: Jann Horn Date: Mon Jan 3 16:59:31 2022 +0100 random: don't reset crng_init_cnt on urandom_read() commit 6c8e11e08a5b74bb8a5cdd5cbc1e5143df0fba72 upstream. At the moment, urandom_read() (used for /dev/urandom) resets crng_init_cnt to zero when it is called at crng_init<2. This is inconsistent: We do it for /dev/urandom reads, but not for the equivalent getrandom(GRND_INSECURE). (And worse, as Jason pointed out, we're only doing this as long as maxwarn>0.) crng_init_cnt is only read in crng_fast_load(); it is relevant at crng_init==0 for determining when to switch to crng_init==1 (and where in the RNG state array to write). As far as I understand: - crng_init==0 means "we have nothing, we might just be returning the same exact numbers on every boot on every machine, we don't even have non-cryptographic randomness; we should shove every bit of entropy we can get into the RNG immediately" - crng_init==1 means "well we have something, it might not be cryptographic, but at least we're not gonna return the same data every time or whatever, it's probably good enough for TCP and ASLR and stuff; we now have time to build up actual cryptographic entropy in the input pool" - crng_init==2 means "this is supposed to be cryptographically secure now, but we'll keep adding more entropy just to be sure". The current code means that if someone is pulling data from /dev/urandom fast enough at crng_init==0, we'll keep resetting crng_init_cnt, and we'll never make forward progress to crng_init==1. It seems to be intended to prevent an attacker from bruteforcing the contents of small individual RNG inputs on the way from crng_init==0 to crng_init==1, but that's misguided; crng_init==1 isn't supposed to provide proper cryptographic security anyway, RNG users who care about getting secure RNG output have to wait until crng_init==2. This code was inconsistent, and it probably made things worse - just get rid of it. Signed-off-by: Jann Horn Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit afec91541862d6e168696eab1de943184b4963b3 Author: Jason A. Donenfeld Date: Thu Dec 30 17:50:52 2021 +0100 random: avoid superfluous call to RDRAND in CRNG extraction commit 2ee25b6968b1b3c66ffa408de23d023c1bce81cf upstream. RDRAND is not fast. RDRAND is actually quite slow. We've known this for a while, which is why functions like get_random_u{32,64} were converted to use batching of our ChaCha-based CRNG instead. Yet CRNG extraction still includes a call to RDRAND, in the hot path of every call to get_random_bytes(), /dev/urandom, and getrandom(2). This call to RDRAND here seems quite superfluous. CRNG is already extracting things based on a 256-bit key, based on good entropy, which is then reseeded periodically, updated, backtrack-mutated, and so forth. The CRNG extraction construction is something that we're already relying on to be secure and solid. If it's not, that's a serious problem, and it's unlikely that mixing in a measly 32 bits from RDRAND is going to alleviate things. 
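The "RDRAND is slow" claim is easy to check from userspace; a rough benchmark sketch (illustrative only; requires an x86 CPU with RDRAND and building with gcc -O2 -mrdrnd, and the numbers will vary by machine):

    /* rdrand_bench.c - time raw RDRAND calls. */
    #include <immintrin.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
            unsigned int v;
            uint64_t i, n = 1 << 20, sink = 0;
            struct timespec a, b;

            clock_gettime(CLOCK_MONOTONIC, &a);
            for (i = 0; i < n; i++) {
                    while (!_rdrand32_step(&v))
                            ;       /* RDRAND may transiently fail; retry */
                    sink += v;
            }
            clock_gettime(CLOCK_MONOTONIC, &b);

            printf("%.1f ns per RDRAND (checksum %llu)\n",
                   ((b.tv_sec - a.tv_sec) * 1e9 + (b.tv_nsec - a.tv_nsec)) / n,
                   (unsigned long long)sink);
            return 0;
    }

On typical hardware this lands in the hundreds of cycles per call, which in the hot path of every get_random_bytes() adds up quickly, all for at most 32 bits of extra input per invocation.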
And in the case where the CRNG doesn't have enough entropy yet, we're already initializing the ChaCha key row with RDRAND in crng_init_try_arch_early(). Removing the call to RDRAND improves performance on an i7-11850H by 370%. In other words, the vast majority of the work done by extract_crng() prior to this commit was devoted to fetching 32 bits of RDRAND. Reviewed-by: Theodore Ts'o Acked-by: Ard Biesheuvel Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit e6ae8dda9700dce312dc1f1e4835b9f360edc649 Author: Dominik Brodowski Date: Fri Dec 31 09:26:08 2021 +0100 random: early initialization of ChaCha constants commit 96562f286884e2db89c74215b199a1084b5fb7f7 upstream. Previously, the ChaCha constants for the primary pool were only initialized in crng_initialize_primary(), called by rand_initialize(). However, some randomness is actually extracted from the primary pool beforehand, e.g. by kmem_cache_create(). Therefore, statically initialize the ChaCha constants for the primary pool. Cc: Herbert Xu Cc: "David S. Miller" Signed-off-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit d462ff7fed56ce83704715fe043366ad63febbbc Author: Eric Biggers Date: Sun Mar 21 22:13:47 2021 -0700 random: initialize ChaCha20 constants with correct endianness commit a181e0fdb2164268274453b5b291589edbb9b22d upstream. On big endian CPUs, the ChaCha20-based CRNG is using the wrong endianness for the ChaCha20 constants. This doesn't matter cryptographically, but technically it means it's not ChaCha20 anymore. Fix it to always use the standard constants. Cc: linux-crypto@vger.kernel.org Cc: Andy Lutomirski Cc: Jann Horn Cc: Theodore Ts'o Acked-by: Ard Biesheuvel Signed-off-by: Eric Biggers Signed-off-by: Herbert Xu Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 598014d6b8f7edc9bc65552649642afded7ea2ce Author: Jason A. Donenfeld Date: Thu Dec 30 15:59:26 2021 +0100 random: use IS_ENABLED(CONFIG_NUMA) instead of ifdefs commit 7b87324112df2e1f9b395217361626362dcfb9fb upstream. Rather than an awkward combination of ifdefs and __maybe_unused, we can ensure more source gets parsed, regardless of the configuration, by using IS_ENABLED for the CONFIG_NUMA conditional code. This makes things cleaner and easier to follow. I've confirmed that on !CONFIG_NUMA, we don't wind up with excess code by accident; the generated object file is the same. Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 17aac85e25d2296cfc72aad008a6ce357e331cac Author: Dominik Brodowski Date: Wed Dec 29 22:10:07 2021 +0100 random: harmonize "crng init done" messages commit 161212c7fd1d9069b232785c75492e50941e2ea8 upstream. We print out "crng init done" for !TRUST_CPU, so we should also print out the same for TRUST_CPU. Signed-off-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit e565f3e7530a45346d55872201f7062f0adf6b13 Author: Jason A. Donenfeld Date: Wed Dec 29 22:10:06 2021 +0100 random: mix bootloader randomness into pool commit 57826feeedb63b091f807ba8325d736775d39afd upstream. If we're trusting bootloader randomness, crng_fast_load() is called by add_hwgenerator_randomness(), which sets us to crng_init==1. However, usually it is only called once for an initial 64-byte push, so bootloader entropy will not mix any bytes into the input pool.
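In code terms, the pre-fix shape was roughly the following (a userspace model; the function names mirror random.c, but the stub bodies and the main() driver are invented for the sketch):

    /* boot_entropy_model.c - model of the trusted vs. untrusted
     * bootloader paths before the fix. Stub bodies, not kernel code. */
    #include <stdbool.h>
    #include <stdio.h>

    static bool trust_bootloader = true;
    static int crng_init;           /* 0 -> 1 after a big enough fast load */
    static int pool_bytes;          /* stand-in for input pool contents */

    static void crng_fast_load(int n)  { if (n >= 64) crng_init = 1; }
    static void crng_slow_load(int n)  { (void)n; }
    static void mix_pool_bytes(int n)  { pool_bytes += n; }

    static void add_bootloader_randomness(int n)
    {
            if (trust_bootloader) {
                    crng_fast_load(n);      /* feeds the crng state only */
            } else {
                    crng_slow_load(n);
                    mix_pool_bytes(n);      /* feeds the input pool too */
            }
    }

    int main(void)
    {
            add_bootloader_randomness(64);  /* the usual single 64-byte push */
            printf("crng_init=%d input_pool_bytes=%d\n", crng_init, pool_bytes);
            return 0;                       /* trusted case: pool left empty */
    }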
So it's conceivable that crng_init==1 when crng_initialize_primary() is called later, but then the input pool is empty. When that happens, the crng state key will be overwritten with extracted output from the empty input pool. That's bad. In contrast, if we're not trusting bootloader randomness, we call crng_slow_load() *and* we call mix_pool_bytes(), so that later crng_initialize_primary() isn't drawing on nothing. In order to prevent crng_initialize_primary() from extracting an empty pool, have the trusted bootloader case mirror that of the untrusted bootloader case, mixing the input into the pool. [linux@dominikbrodowski.net: rewrite commit message] Signed-off-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 1d35c20cf0233eac135bef624ad9429dd7b20dd8 Author: Jason A. Donenfeld Date: Wed Dec 29 22:10:04 2021 +0100 random: do not re-init if crng_reseed completes before primary init commit 9c3ddde3f811aabbb83778a2a615bf141b4909ef upstream. If the bootloader supplies sufficient material and crng_reseed() is called very early on, but not so early that wqs aren't available yet, then we might transition to crng_init==2 before rand_initialize()'s call to crng_initialize_primary() is made. Then, when crng_initialize_primary() is called, if we're trusting the CPU's RDRAND instructions, we'll needlessly reinitialize the RNG and emit a message about it. This is mostly harmless, as numa_crng_init() will allocate and then free what it just allocated, and excessive calls to invalidate_batched_entropy() aren't so harmful. But it is funky and the extra message is confusing, so avoid the re-initialization altogether by checking for crng_init < 2 in crng_initialize_primary(), just as we already do in crng_reseed(). Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 242dc3744148b1f930050140606c7e170b4704f7 Author: Jason A. Donenfeld Date: Fri Dec 24 19:17:58 2021 +0100 random: do not sign extend bytes for rotation when mixing commit 0d9488ffbf2faddebc6bac055bfa6c93b94056a3 upstream. By using `char` instead of `unsigned char`, certain platforms will sign extend the byte when `w = rol32(*bytes++, input_rotate)` is called, meaning that bit 7 is overrepresented when mixing. This isn't a real problem (unless the mixer itself is already broken) since it's still invertible, but it's not quite correct either. Fix this by using an explicit unsigned type. Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit dbb2c9ca7c50fdf048523b2aab9c46ed2f3ca03f Author: Jason A. Donenfeld Date: Tue Dec 21 16:31:27 2021 +0100 random: use BLAKE2s instead of SHA1 in extraction commit 9f9eff85a008b095eafc5f4ecbaf5aca689271c1 upstream. This commit addresses one of the lower hanging fruits of the RNG: its usage of SHA1. BLAKE2s is generally faster, and certainly more secure, than SHA1, which has [1] been [2] really [3] very [4] broken [5]. Additionally, the current construction in the RNG doesn't use the full SHA1 function, as specified, and allows overwriting the IV with RDRAND output in an undocumented way, even in the case when RDRAND isn't set to "trusted", which means potential malicious IV choices. And its short length means that keeping only half of it secret when feeding back into the mixer gives us only 2^80 bits of forward secrecy. In other words, not only is the choice of hash function dated, but the use of it isn't really great either.
This commit aims to fix both of these issues while also keeping the general structure and semantics as close to the original as possible. Specifically: a) Rather than overwriting the hash IV with RDRAND, we put it into BLAKE2's documented "salt" and "personal" fields, which were specifically created for this type of usage. b) Since this function feeds the full hash result back into the entropy collector, we only return from it half the length of the hash, just as it was done before. This increases the construction's forward secrecy from 2^80 to a much more comfortable 2^128. c) Rather than using the raw "sha1_transform" function alone, we instead use the full proper BLAKE2s function, with finalization. This also has the advantage of supplying 16 bytes at a time rather than SHA1's 10 bytes, which, in addition to having a faster compression function to begin with, means faster extraction in general. On an Intel i7-11850H, this commit makes initial seeding around 131% faster. BLAKE2s itself has the nice property of internally being based on the ChaCha permutation, which the RNG is already using for expansion, so there shouldn't be any issue with newness, funkiness, or surprising CPU behavior, since it's based on something already in use. [1] https://eprint.iacr.org/2005/010.pdf [2] https://www.iacr.org/archive/crypto2005/36210017/36210017.pdf [3] https://eprint.iacr.org/2015/967.pdf [4] https://shattered.io/static/shattered.pdf [5] https://www.usenix.org/system/files/sec20-leurent.pdf Reviewed-by: Theodore Ts'o Reviewed-by: Eric Biggers Reviewed-by: Greg Kroah-Hartman Reviewed-by: Jean-Philippe Aumasson Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 37b962834b290444f1d879605042d5ece54e87ae Author: Sebastian Andrzej Siewior Date: Tue Dec 7 13:17:33 2021 +0100 random: remove unused irq_flags argument from add_interrupt_randomness() commit 703f7066f40599c290babdb79dd61319264987e9 upstream. Since commit ee3e00e9e7101 ("random: use registers from interrupted code for CPU's w/o a cycle counter") the irq_flags argument is no longer used. Remove unused irq_flags. Cc: Borislav Petkov Cc: Dave Hansen Cc: Dexuan Cui Cc: H. Peter Anvin Cc: Haiyang Zhang Cc: Ingo Molnar Cc: K. Y. Srinivasan Cc: Stephen Hemminger Cc: Thomas Gleixner Cc: Wei Liu Cc: linux-hyperv@vger.kernel.org Cc: x86@kernel.org Signed-off-by: Sebastian Andrzej Siewior Acked-by: Wei Liu Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 5281a2bac10e9d8e334652b8af1ceb9c1b64e52e Author: Mark Brown Date: Wed Dec 1 17:44:49 2021 +0000 random: document add_hwgenerator_randomness() with other input functions commit 2b6c6e3d9ce3aa0e547ac25d60e06fe035cd9f79 upstream. The section at the top of random.c which documents the input functions available does not document add_hwgenerator_randomness() which might lead a reader to overlook it. Add a brief note about it. Signed-off-by: Mark Brown [Jason: reorganize position of function in doc comment and also document add_bootloader_randomness() while we're at it.] Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 8ade1d8adaebc1dd8064b981878e1f6cd0b796d2 Author: Eric Biggers Date: Wed Dec 23 00:09:57 2020 -0800 crypto: blake2s - adjust include guard naming commit 8786841bc2020f7f2513a6c74e64912f07b9c0dc upstream. Use the full path in the include guards for the BLAKE2s headers to avoid ambiguity and to match the convention for most files in include/crypto/. 
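For illustration, the shape of that rename (a sketch of the convention; the pre-change guard name is assumed here for the example, not quoted from the tree):

    /* before: bare guard name */
    #ifndef BLAKE2S_H
    #define BLAKE2S_H
    /* ... declarations ... */
    #endif /* BLAKE2S_H */

    /* after: full-path guard, matching the include/crypto/ convention */
    #ifndef _CRYPTO_BLAKE2S_H
    #define _CRYPTO_BLAKE2S_H
    /* ... declarations ... */
    #endif /* _CRYPTO_BLAKE2S_H */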
Signed-off-by: Eric Biggers Acked-by: Ard Biesheuvel Signed-off-by: Herbert Xu Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 82fc363160a8f17f4da444d7cfd63508a957999c Author: Eric Biggers Date: Wed Dec 23 00:09:58 2020 -0800 crypto: blake2s - include <linux/bug.h> instead of <asm/bug.h> commit bbda6e0f1303953c855ee3669655a81b69fbe899 upstream. Address the following checkpatch warning: WARNING: Use #include <linux/bug.h> instead of <asm/bug.h> Signed-off-by: Eric Biggers Acked-by: Ard Biesheuvel Signed-off-by: Herbert Xu Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 4f5add8764819aea40919e0d211ad171b1a514a3 Author: Jason A. Donenfeld Date: Tue Nov 30 13:43:15 2021 -0500 MAINTAINERS: co-maintain random.c commit 58e1100fdc5990b0cc0d4beaf2562a92e621ac7d upstream. random.c is a bit understaffed, and folks want more prompt reviews. I've got the crypto background and the interest to do these reviews, and have authored parts of the file already. Cc: Theodore Ts'o Cc: Greg Kroah-Hartman Signed-off-by: Jason A. Donenfeld Signed-off-by: Linus Torvalds Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 04cde9a57626a1964c0204cc3c8aefa4bc472d5b Author: Eric Biggers Date: Sun Mar 21 22:14:00 2021 -0700 random: remove dead code left over from blocking pool commit 118a4417e14348b2e46f5e467da8444ec4757a45 upstream. Remove some dead code that was left over following commit 90ea1c6436d2 ("random: remove the blocking pool"). Cc: linux-crypto@vger.kernel.org Cc: Andy Lutomirski Cc: Jann Horn Cc: Theodore Ts'o Reviewed-by: Andy Lutomirski Acked-by: Ard Biesheuvel Signed-off-by: Eric Biggers Signed-off-by: Herbert Xu Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 528333acd95b8f3888a41f0fae484e20529a10d4 Author: Ard Biesheuvel Date: Thu Nov 5 16:29:44 2020 +0100 random: avoid arch_get_random_seed_long() when collecting IRQ randomness commit 390596c9959c2a4f5b456df339f0604df3d55fe0 upstream. When reseeding the CRNG periodically, arch_get_random_seed_long() is called to obtain entropy from an architecture specific source if one is implemented. In most cases, these are special instructions, but in some cases, such as on ARM, we may want to back this using firmware calls, which are considerably more expensive. Another call to arch_get_random_seed_long() exists in the CRNG driver, in add_interrupt_randomness(), which collects entropy by capturing inter-interrupt timing and relying on interrupt jitter to provide random bits. This is done by keeping a per-CPU state, and mixing in the IRQ number, the cycle counter and the return address every time an interrupt is taken, and mixing this per-CPU state into the entropy pool every 64 invocations, or at least once per second. The entropy that is gathered this way is credited as 1 bit of entropy. Every time this happens, arch_get_random_seed_long() is invoked, and the result is mixed in as well, and also credited with 1 bit of entropy. This means that arch_get_random_seed_long() is called at least once per second on every CPU, which seems excessive, and doesn't really scale, especially in a virtualization scenario where CPUs may be oversubscribed: in cases where arch_get_random_seed_long() is backed by an instruction that actually goes back to a shared hardware entropy source (such as RNDRRS on ARM), we will end up hitting it hundreds of times per second.
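A simplified model of that calling pattern (a userspace sketch with stub functions; the 64-invocation cadence comes from the description above, and everything else is invented for the illustration):

    /* irq_seed_model.c - count how often the arch seed hook fires
     * under steady interrupt load in the pre-patch scheme. */
    #include <stdbool.h>
    #include <stdio.h>

    static unsigned long arch_calls;

    /* stand-in for arch_get_random_seed_long() */
    static bool arch_seed(unsigned long *v)
    {
            arch_calls++;
            *v = 0;         /* pretend hardware-provided seed */
            return true;
    }

    int main(void)
    {
            unsigned long seed, irqs = 64000;       /* say, 1k IRQs/s for 64s */

            for (unsigned long i = 1; i <= irqs; i++) {
                    if (i % 64)
                            continue;       /* fast pool not yet due for mixing */
                    /* mix the per-CPU fast pool into the input pool here... */
                    if (arch_seed(&seed))
                            ;               /* ...and mix the arch seed in too */
            }
            printf("%lu interrupts -> %lu arch seed calls\n", irqs, arch_calls);
            return 0;
    }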
So let's drop the call to arch_get_random_seed_long() from add_interrupt_randomness(), and instead, rely on crng_reseed() to call the arch hook to get random seed material from the platform. Signed-off-by: Ard Biesheuvel Reviewed-by: Andre Przywara Tested-by: Andre Przywara Reviewed-by: Eric Biggers Acked-by: Marc Zyngier Reviewed-by: Jason A. Donenfeld Link: https://lore.kernel.org/r/20201105152944.16953-1-ardb@kernel.org Signed-off-by: Will Deacon Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 42c54fcc964dacedb9cf8eaf504760e50e9ea36e Author: Mark Rutland Date: Mon Feb 10 13:00:13 2020 +0000 random: add arch_get_random_*long_early() commit 253d3194c2b58152fe830fd27c2fd83ebc6fe5ee upstream. Some architectures (e.g. arm64) can have heterogeneous CPUs, and the boot CPU may be able to provide entropy while secondary CPUs cannot. On such systems, arch_get_random_long() and arch_get_random_seed_long() will fail unless support for RNG instructions has been detected on all CPUs. This prevents the boot CPU from being able to provide (potentially) trusted entropy when seeding the primary CRNG. To make it possible to seed the primary CRNG from the boot CPU without adversely affecting the runtime versions of arch_get_random_long() and arch_get_random_seed_long(), this patch adds new early versions of the functions used when initializing the primary CRNG. Default implementations are provided atop of the existing arch_get_random_long() and arch_get_random_seed_long() so that only architectures with such constraints need to provide the new helpers. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland Cc: Mark Brown Cc: Theodore Ts'o Link: https://lore.kernel.org/r/20200210130015.17664-3-mark.rutland@arm.com Signed-off-by: Theodore Ts'o Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit af98d2ae79f05a725bbd7a608eac4a4e1dbf8fb5 Author: Richard Henderson Date: Fri Jan 10 14:54:20 2020 +0000 powerpc: Use bool in archrandom.h commit 98dcfce69729f9ce0fb14f96a39bbdba21429597 upstream. The generic interface uses bool not int; match that. Reviewed-by: Ard Biesheuvel Signed-off-by: Richard Henderson Signed-off-by: Mark Brown Link: https://lore.kernel.org/r/20200110145422.49141-9-broonie@kernel.org Signed-off-by: Theodore Ts'o Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 78b28324aac865d7ce21534675b2c4e8845a2594 Author: Richard Henderson Date: Fri Jan 10 14:54:18 2020 +0000 linux/random.h: Mark CONFIG_ARCH_RANDOM functions __must_check commit 904caa6413c87aacbf7d0682da617c39ca18cf1a upstream. We must not use the pointer output without validating the success of the random read. Reviewed-by: Ard Biesheuvel Signed-off-by: Richard Henderson Signed-off-by: Mark Brown Link: https://lore.kernel.org/r/20200110145422.49141-7-broonie@kernel.org Signed-off-by: Theodore Ts'o Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 2295356d23ce80503d050b40fa6ff919a6c42732 Author: Richard Henderson Date: Fri Jan 10 14:54:17 2020 +0000 linux/random.h: Use false with bool commit 66f5ae899ada79c0e9a3d8d954f93a72344cd350 upstream. Keep the generic fallback versions in sync with the other architecture specific implementations and use the proper name for false. Suggested-by: Ard Biesheuvel Signed-off-by: Richard Henderson Signed-off-by: Mark Brown Link: https://lore.kernel.org/r/20200110145422.49141-6-broonie@kernel.org Signed-off-by: Theodore Ts'o Signed-off-by: Jason A. 
Donenfeld Signed-off-by: Greg Kroah-Hartman commit cbcd67f44e3746ff338fa34463d3c7fd2f985d20 Author: Richard Henderson Date: Fri Jan 10 14:54:16 2020 +0000 linux/random.h: Remove arch_has_random, arch_has_random_seed commit 647f50d5d9d933b644b29c54f13ac52af1b1774d upstream. The arm64 version of archrandom.h will need to be able to test for support and read the random number without preemption, so a separate query predicate is not practical. Since this part of the generic interface is unused, remove it. Signed-off-by: Richard Henderson Signed-off-by: Mark Brown Link: https://lore.kernel.org/r/20200110145422.49141-5-broonie@kernel.org Signed-off-by: Theodore Ts'o Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 3e5c6758b3017ea98c7494a472482d340193d5d1 Author: Richard Henderson Date: Fri Jan 10 14:54:14 2020 +0000 powerpc: Remove arch_has_random, arch_has_random_seed commit cbac004995a0ce8453bdc555fab579e2bdb842a6 upstream. These symbols are currently part of the generic archrandom.h interface, but are currently unused and can be removed. Signed-off-by: Richard Henderson Signed-off-by: Mark Brown Link: https://lore.kernel.org/r/20200110145422.49141-3-broonie@kernel.org Signed-off-by: Theodore Ts'o Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 2e266bef38946f29ba56592e4de8c7e419b02d1f Author: Richard Henderson Date: Fri Jan 10 14:54:13 2020 +0000 x86: Remove arch_has_random, arch_has_random_seed commit 5f2ed7f5b99b54389b74e53309677831ac9cb9d7 upstream. Use the expansion of these macros directly in arch_get_random_*. These symbols are currently part of the generic archrandom.h interface, but are currently unused and can be removed. Signed-off-by: Richard Henderson Signed-off-by: Mark Brown Link: https://lore.kernel.org/r/20200110145422.49141-2-broonie@kernel.org Signed-off-by: Theodore Ts'o Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit c3d17006acb0ec002dce91a9d0f0ad9ec2d14aec Author: Mark Rutland Date: Tue Mar 10 12:09:12 2020 +0000 random: avoid warnings for !CONFIG_NUMA builds commit ab9a7e27044b87ff2be47b8f8e095400e7fccc44 upstream. As crng_initialize_secondary() is only called by do_numa_crng_init(), and the latter is under ifdeffery for CONFIG_NUMA, when CONFIG_NUMA is not selected the compiler will warn that the former is unused: | drivers/char/random.c:820:13: warning: 'crng_initialize_secondary' defined but not used [-Wunused-function] | 820 | static void crng_initialize_secondary(struct crng_state *crng) | | ^~~~~~~~~~~~~~~~~~~~~~~~~ Stephen reports that this happens for x86_64 noallconfig builds. We could move crng_initialize_secondary() and crng_init_try_arch() under the CONFIG_NUMA ifdeffery, but this has the unfortunate property of separating them from crng_initialize_primary() and crng_init_try_arch_early() respectively. Instead, let's mark crng_initialize_secondary() as __maybe_unused. Link: https://lore.kernel.org/r/20200310121747.GA49602@lakrids.cambridge.arm.com Fixes: 5cbe0f13b51a ("random: split primary/secondary crng init paths") Reported-by: Stephen Rothwell Signed-off-by: Mark Rutland Cc: Theodore Ts'o Signed-off-by: Theodore Ts'o Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 3c2691868d499a8023d2ae56fc60496c806350de Author: Mark Rutland Date: Mon Feb 10 13:00:12 2020 +0000 random: split primary/secondary crng init paths commit 5cbe0f13b51ac2fb2fd55902cff8d0077fc084c0 upstream. Currently crng_initialize() is used for both the primary CRNG and secondary CRNGs. 
While we wish to share common logic, we need to do a number of additional things for the primary CRNG, and this would be easier to deal with were these handled in separate functions. This patch splits crng_initialize() into crng_initialize_primary() and crng_initialize_secondary(), with common logic factored out into a crng_init_try_arch() helper. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland Cc: Mark Brown Cc: Theodore Ts'o Link: https://lore.kernel.org/r/20200210130015.17664-2-mark.rutland@arm.com Signed-off-by: Theodore Ts'o Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit e13ea48b984d49863057df2ac8afd79b64a4fea8 Author: Yangtao Li Date: Tue Jan 7 16:56:11 2020 -0500 random: remove some dead code of poolinfo commit 09a6d00a42ce0e63e2a15be3d070974bcc656ec7 upstream. Since it is not being used, delete it. Signed-off-by: Yangtao Li Link: https://lore.kernel.org/r/20190607182517.28266-5-tiny.windzz@gmail.com Signed-off-by: Theodore Ts'o Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 221e43c84b0257738966f8ff2630648728e77998 Author: Yangtao Li Date: Tue Jan 7 16:55:34 2020 -0500 random: fix typo in add_timer_randomness() commit 727d499a6f4f29b6abdb635032f5e53e5905aedb upstream. s/entimate/estimate Signed-off-by: Yangtao Li Link: https://lore.kernel.org/r/20190607182517.28266-4-tiny.windzz@gmail.com Signed-off-by: Theodore Ts'o Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit a30bf3c41fca918e62e8ff24972a3ad918acd83e Author: Yangtao Li Date: Fri Jun 7 14:25:15 2019 -0400 random: Add and use pr_fmt() commit 12cd53aff5ea0359b1dac91fcd9ddc7b9e646588 upstream. Prefix all printk/pr_ messages with "random: " to make the logging a bit more consistent. Miscellanea: o Convert printks to pr_notice o Whitespace to align to open parentheses o Remove embedded "random: " from pr_* as pr_fmt adds it Signed-off-by: Yangtao Li Link: https://lore.kernel.org/r/20190607182517.28266-3-tiny.windzz@gmail.com Signed-off-by: Theodore Ts'o Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 6eaeae8da5b2fe811f2b9f0552e55dbcbd613fb0 Author: Yangtao Li Date: Fri Jun 7 14:25:14 2019 -0400 random: convert to ENTROPY_BITS for better code readability commit 12faac30d157970fdbfa171bbeb1fb88350303b1 upstream. Signed-off-by: Yangtao Li Link: https://lore.kernel.org/r/20190607182517.28266-2-tiny.windzz@gmail.com Signed-off-by: Theodore Ts'o Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 9587bbd9c078cbdafb24f289f4fb0cd1911cd155 Author: Yangtao Li Date: Tue Jan 7 16:10:28 2020 -0500 random: remove unnecessary unlikely() commit 870e05b1b18814911cb2703a977f447cb974f0f9 upstream. WARN_ON() already contains an unlikely(), so it's not necessary to use unlikely. Signed-off-by: Yangtao Li Link: https://lore.kernel.org/r/20190607182517.28266-1-tiny.windzz@gmail.com Signed-off-by: Theodore Ts'o Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 2de0a1e2c82f19769b1c4d8d0eab6584e30255c2 Author: Andy Lutomirski Date: Mon Dec 23 00:20:51 2019 -0800 random: remove kernel.random.read_wakeup_threshold commit c95ea0c69ffda19381c116db2be23c7e654dac98 upstream. It has no effect any more, so remove it. We can revert this if there is some user code that expects to be able to set this sysctl.
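For anyone auditing for such user code: the knob was visible to userspace as /proc/sys/kernel/random/read_wakeup_threshold (the standard proc mapping of the sysctl name above), and a small probe along these lines (an illustrative sketch) shows whether a given kernel still exposes it:

    /* probe_sysctl.c - check for the old read_wakeup_threshold knob. */
    #include <stdio.h>

    int main(void)
    {
            int v;
            FILE *f = fopen("/proc/sys/kernel/random/read_wakeup_threshold", "r");

            if (!f) {
                    puts("sysctl not present (removed on this kernel)");
                    return 0;
            }
            if (fscanf(f, "%d", &v) == 1)
                    printf("read_wakeup_threshold = %d\n", v);
            fclose(f);
            return 0;
    }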
Signed-off-by: Andy Lutomirski Link: https://lore.kernel.org/r/a74ed2cf0b5a5451428a246a9239f5bc4e29358f.1577088521.git.luto@kernel.org Signed-off-by: Theodore Ts'o Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 892d340ee23544c72717d05283049c4d3276693e Author: Andy Lutomirski Date: Mon Dec 23 00:20:50 2019 -0800 random: delete code to pull data into pools commit 84df7cdfbb215a34657b39f4257dab739efa2df9 upstream. There is no pool that pulls, so it was just dead code. Signed-off-by: Andy Lutomirski Link: https://lore.kernel.org/r/4a05fe0c7a5c831389ef4aea51d24528ac8682c7.1577088521.git.luto@kernel.org Signed-off-by: Theodore Ts'o Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 57908fb8c1f5aed214571d16d5f1b4a3395f0183 Author: Andy Lutomirski Date: Mon Dec 23 00:20:49 2019 -0800 random: remove the blocking pool commit 90ea1c6436d26e62496616fb5891e00819ff4849 upstream. There is no longer any interface to read data from the blocking pool, so remove it. This enables quite a bit of code deletion, much of which will be done in subsequent patches. Signed-off-by: Andy Lutomirski Link: https://lore.kernel.org/r/511225a224bf0a291149d3c0b8b45393cd03ab96.1577088521.git.luto@kernel.org Signed-off-by: Theodore Ts'o Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit d776934a0cdc61236d2bac9496f39dfddccaac3a Author: Dominik Brodowski Date: Wed Dec 29 22:10:03 2021 +0100 random: fix crash on multiple early calls to add_bootloader_randomness() commit f7e67b8e803185d0aabe7f29d25a35c8be724a78 upstream. Currently, if CONFIG_RANDOM_TRUST_BOOTLOADER is enabled, multiple calls to add_bootloader_randomness() are broken and can cause a NULL pointer dereference, as noted by Ivan T. Ivanov. This is not only a hypothetical problem, as qemu on arm64 may provide bootloader entropy via EFI and via devicetree. On the first call to add_hwgenerator_randomness(), crng_fast_load() is executed, and if the seed is long enough, crng_init will be set to 1. On subsequent calls to add_bootloader_randomness() and then to add_hwgenerator_randomness(), crng_fast_load() will be skipped. Instead, wait_event_interruptible() and then credit_entropy_bits() will be called. If the entropy count for that second seed is large enough, that proceeds to crng_reseed(). However, both wait_event_interruptible() and crng_reseed() depend (at least in numa_crng_init()) on workqueues. Therefore, test whether system_wq is already initialized, which is a sufficient indicator that workqueue_init_early() has progressed far enough. If we wind up hitting the !system_wq case, we later want to do what would have been done there when wqs are up, so set a flag, and do that work later from the rand_initialize() call. Reported-by: Ivan T. Ivanov Fixes: 18b915ac6b0a ("efi/random: Treat EFI_RNG_PROTOCOL output as bootloader randomness") Cc: stable@vger.kernel.org Signed-off-by: Dominik Brodowski [Jason: added crng_need_done state and related logic.] Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman Signed-off-by: Greg Kroah-Hartman commit 6a54da4f7e9ee0e863c7a1c2eab67e61aca10098 Author: Andy Lutomirski Date: Mon Dec 23 00:20:48 2019 -0800 random: make /dev/random be almost like /dev/urandom commit 30c08efec8884fb106b8e57094baa51bb4c44e32 upstream. This patch changes the read semantics of /dev/random to be the same as /dev/urandom except that reads will block until the CRNG is ready. None of the cleanups that this enables have been done yet.
As a result, this gives a warning about an unused function. Signed-off-by: Andy Lutomirski Link: https://lore.kernel.org/r/5e6ac8831c6cf2e56a7a4b39616d1732b2bdd06c.1577088521.git.luto@kernel.org Signed-off-by: Theodore Ts'o Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit e962a3ae799618cba71d5e5eaebf7727159cea4a Author: Andy Lutomirski Date: Mon Dec 23 00:20:47 2019 -0800 random: ignore GRND_RANDOM in getentropy(2) commit 48446f198f9adcb499b30332488dfd5bc3f176f6 upstream. The separate blocking pool is going away. Start by ignoring GRND_RANDOM in getentropy(2). This should not materially break any API. Any code that worked without this change should work at least as well with this change. Signed-off-by: Andy Lutomirski Link: https://lore.kernel.org/r/705c5a091b63cc5da70c99304bb97e0109be0a26.1577088521.git.luto@kernel.org Signed-off-by: Theodore Ts'o Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 82c1e117cabe36ffe91980fb93e01f92038ad7c4 Author: Andy Lutomirski Date: Mon Dec 23 00:20:46 2019 -0800 random: add GRND_INSECURE to return best-effort non-cryptographic bytes commit 75551dbf112c992bc6c99a972990b3f272247e23 upstream. Signed-off-by: Andy Lutomirski Link: https://lore.kernel.org/r/d5473b56cf1fa900ca4bd2b3fc1e5b8874399919.1577088521.git.luto@kernel.org Signed-off-by: Theodore Ts'o Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit a7b2d8f6e6e350f3ecbee2cbe7c143726b211a68 Author: Andy Lutomirski Date: Mon Dec 23 00:20:45 2019 -0800 random: Add a urandom_read_nowait() for random APIs that don't warn commit c6f1deb158789abba02a7eba600747843eeb3a57 upstream. /dev/random and getrandom() never warn. Split the meat of urandom_read() into urandom_read_nowarn() and leave the warning code in urandom_read(). This has no effect on kernel behavior, but it makes subsequent patches more straightforward. It also makes the fact that getrandom() never warns more obvious. Signed-off-by: Andy Lutomirski Link: https://lore.kernel.org/r/c87ab200588de746431d9f916501ef11e5242b13.1577088521.git.luto@kernel.org Signed-off-by: Theodore Ts'o Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit ab956b5be99a97474334c62dc15470d205c5f563 Author: Andy Lutomirski Date: Mon Dec 23 00:20:44 2019 -0800 random: Don't wake crng_init_wait when crng_init == 1 commit 4c8d062186d9923c09488716b2fb1b829b5b8006 upstream. crng_init_wait is only used to wait for crng_init to be set to 2, so there's no point in waking it when crng_init is set to 1. Remove the unnecessary wake_up_interruptible() call. Signed-off-by: Andy Lutomirski Link: https://lore.kernel.org/r/6fbc0bfcbfc1fa2c76fd574f5b6f552b11be7fde.1577088521.git.luto@kernel.org Signed-off-by: Theodore Ts'o Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 79dd56c9fe2b4ed5112b7f34a5e79937323f035e Author: Jason A. Donenfeld Date: Tue Jan 11 18:58:43 2022 +0100 lib/crypto: sha1: re-roll loops to reduce code size commit 9a1536b093bb5bf60689021275fd24d513bb8db0 upstream. With SHA-1 no longer being used for anything performance oriented, and also soon to be phased out entirely, we can make up for the space added by unrolled BLAKE2s by simply re-rolling SHA-1. Since SHA-1 is so much more complex, re-rolling it more or less takes care of the code size added by BLAKE2s. And eventually, hopefully we'll see SHA-1 removed entirely from most small kernel builds. Cc: Herbert Xu Cc: Ard Biesheuvel Tested-by: Geert Uytterhoeven Signed-off-by: Jason A.
Donenfeld Signed-off-by: Greg Kroah-Hartman commit 1815bfce3e21f6cf2bca1a49e6c60a135119044c Author: Jason A. Donenfeld Date: Tue Jan 11 14:37:41 2022 +0100 lib/crypto: blake2s: move hmac construction into wireguard commit d8d83d8ab0a453e17e68b3a3bed1f940c34b8646 upstream. Basically nobody should use blake2s in an HMAC construction; it already has a keyed variant. But unfortunately for historical reasons, Noise, used by WireGuard, uses HKDF quite strictly, which means we have to use this. Because this really shouldn't be used by others, this commit moves it into wireguard's noise.c locally, so that kernels that aren't using WireGuard don't get this superfluous code baked in. On m68k systems, this shaves off ~314 bytes. Cc: Herbert Xu Tested-by: Geert Uytterhoeven Acked-by: Ard Biesheuvel [Jason: for stable, skip the wireguard changes, since this kernel doesn't have wireguard.] Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 38ec02a401b05516d9bc0c7254850b26d2509f06 Author: Jason A. Donenfeld Date: Fri Nov 8 13:22:28 2019 +0100 crypto: blake2s - generic C library implementation and selftest commit 66d7fb94e4ffe5acc589e0b2b4710aecc1f07a28 upstream. The C implementation was originally based on Samuel Neves' public domain reference implementation but has since been heavily modified for the kernel. We're able to do compile-time optimizations by moving some scaffolding around the final function into the header file. Information: https://blake2.net/ Signed-off-by: Jason A. Donenfeld Signed-off-by: Samuel Neves Co-developed-by: Samuel Neves [ardb: - move from lib/zinc to lib/crypto - remove simd handling - rewrote selftest for better coverage - use fixed digest length for blake2s_hmac() and rename to blake2s256_hmac() ] Signed-off-by: Ard Biesheuvel Signed-off-by: Herbert Xu [Jason: for stable, skip kconfig and wire up directly, and skip the arch hooks; optimized implementations need not be backported.] Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 365af44f3ea60ad75a99a80fca903db1f2d9959f Author: Andy Shevchenko Date: Wed Mar 21 19:01:40 2018 +0200 crypto: Deduplicate le32_to_cpu_array() and cpu_to_le32_array() commit 9def051018c08e65c532822749e857eb4b2e12e7 upstream. Deduplicate le32_to_cpu_array() and cpu_to_le32_array() by moving them to the generic header. No functional change implied. Signed-off-by: Andy Shevchenko Signed-off-by: Herbert Xu Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 67108947e0bb1b8a088be97ba2939fcf6bd84e14 Author: Herbert Xu Date: Sun Nov 17 08:48:17 2019 +0800 Revert "hwrng: core - Freeze khwrng thread during suspend" commit 08e97aec700aeff54c4847f170e566cbd7e14e81 upstream. This reverts commit 03a3bb7ae631 ("hwrng: core - Freeze khwrng thread during suspend"), ff296293b353 ("random: Support freezable kthreads in add_hwgenerator_randomness()") and 59b569480dc8 ("random: Use wait_event_freezable() in add_hwgenerator_randomness()"). These patches introduced regressions and we need more time to get them ready for mainline. Signed-off-by: Herbert Xu Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 3209e130b81e18074ab0eb23d237fd4f9a961d69 Author: Borislav Petkov Date: Tue Oct 1 19:50:23 2019 +0200 char/random: Add a newline at the end of the file commit 3fd57e7a9e66b9a8bcbf0560ff09e84d0b8de1bd upstream. 
On Tue, Oct 01, 2019 at 10:14:40AM -0700, Linus Torvalds wrote: > The previous state of the file didn't have that 0xa at the end, so you get that > > > -EXPORT_SYMBOL_GPL(add_bootloader_randomness); > \ No newline at end of file > +EXPORT_SYMBOL_GPL(add_bootloader_randomness); > > which is "the '-' line doesn't have a newline, the '+' line does" marker. Aaha, that makes total sense, thanks for explaining. Oh well, let's fix it then so that people don't scratch heads like me. Signed-off-by: Linus Torvalds Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 9f00f56590b95c7b75f725f4ec43293c91da9ad9 Author: Stephen Boyd Date: Thu Sep 5 09:41:12 2019 -0700 random: Use wait_event_freezable() in add_hwgenerator_randomness() commit 59b569480dc8bb9dce57cdff133853a842dfd805 upstream. Sebastian reports that after commit ff296293b353 ("random: Support freezable kthreads in add_hwgenerator_randomness()") we can call might_sleep() when the task state is TASK_INTERRUPTIBLE (state=1). This leads to the following warning. do not call blocking ops when !TASK_RUNNING; state=1 set at [<00000000349d1489>] prepare_to_wait_event+0x5a/0x180 WARNING: CPU: 0 PID: 828 at kernel/sched/core.c:6741 __might_sleep+0x6f/0x80 Modules linked in: CPU: 0 PID: 828 Comm: hwrng Not tainted 5.3.0-rc7-next-20190903+ #46 RIP: 0010:__might_sleep+0x6f/0x80 Call Trace: kthread_freezable_should_stop+0x1b/0x60 add_hwgenerator_randomness+0xdd/0x130 hwrng_fillfn+0xbf/0x120 kthread+0x10c/0x140 ret_from_fork+0x27/0x50 We shouldn't call kthread_freezable_should_stop() from deep within the wait_event code because the task state is still set as TASK_INTERRUPTIBLE instead of TASK_RUNNING and kthread_freezable_should_stop() will try to call into the freezer with the task in the wrong state. Use wait_event_freezable() instead so that it calls schedule() in the right place and tries to enter the freezer when the task state is TASK_RUNNING instead. Reported-by: Sebastian Andrzej Siewior Tested-by: Sebastian Andrzej Siewior Cc: Keerthy Fixes: ff296293b353 ("random: Support freezable kthreads in add_hwgenerator_randomness()") Signed-off-by: Stephen Boyd Signed-off-by: Herbert Xu Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit c288318b4c042b55c242d90a02d7ae35e03d47ce Author: Hsin-Yi Wang Date: Fri Aug 23 14:24:51 2019 +0800 fdt: add support for rng-seed commit 428826f5358c922dc378830a1717b682c0823160 upstream. Introducing a chosen node, rng-seed, which is entropy that can be passed to the kernel very early to increase initial device randomness. The bootloader should provide this entropy, and the value is read from /chosen/rng-seed in DT. Obtain of_fdt_crc32 for CRC check after early_init_dt_scan_nodes(), since early_init_dt_scan_chosen() would modify fdt to erase rng-seed. Add a new interface add_bootloader_randomness() for the rng-seed use case. Depending on whether the seed is trustworthy, the rng seed is passed to add_hwgenerator_randomness(); otherwise it is passed to add_device_randomness(). The decision is controlled by the kernel config RANDOM_TRUST_BOOTLOADER. Signed-off-by: Hsin-Yi Wang Reviewed-by: Stephen Boyd Reviewed-by: Rob Herring Reviewed-by: Theodore Ts'o # drivers/char/random.c Signed-off-by: Will Deacon Signed-off-by: Jason A.
Donenfeld Signed-off-by: Greg Kroah-Hartman commit 982df0717c06421f83c7818c975fb1928b9158d2 Author: Stephen Boyd Date: Mon Aug 19 08:02:45 2019 -0700 random: Support freezable kthreads in add_hwgenerator_randomness() commit ff296293b3538d19278a7f7cd1f3aa600ad9164c upstream. The kthread calling this function is freezable after commit 03a3bb7ae631 ("hwrng: core - Freeze khwrng thread during suspend") is applied. Unfortunately, this function uses wait_event_interruptible() but doesn't check for the kthread being woken up by the fake freezer signal. When a user suspends the system, this kthread will wake up and if it fails the entropy size check it will immediately go back to sleep and not go into the freezer. Eventually, suspend will fail because the task never froze and a warning message like this may appear: PM: suspend entry (deep) Filesystems sync: 0.000 seconds Freezing user space processes ... (elapsed 0.001 seconds) done. OOM killer disabled. Freezing remaining freezable tasks ... Freezing of tasks failed after 20.003 seconds (1 tasks refusing to freeze, wq_busy=0): hwrng R running task 0 289 2 0x00000020 [] (__schedule) from [] (schedule+0x3c/0xc0) [] (schedule) from [] (add_hwgenerator_randomness+0xb0/0x100) [] (add_hwgenerator_randomness) from [] (hwrng_fillfn+0xc0/0x14c [rng_core]) [] (hwrng_fillfn [rng_core]) from [] (kthread+0x134/0x148) [] (kthread) from [] (ret_from_fork+0x14/0x2c) Check for a freezer signal here and skip adding any randomness if the task wakes up because it was frozen. This should make the kthread freeze properly and suspend work again. Fixes: 03a3bb7ae631 ("hwrng: core - Freeze khwrng thread during suspend") Reported-by: Keerthy Tested-by: Keerthy Signed-off-by: Stephen Boyd Signed-off-by: Herbert Xu Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 5ca70da062c59aa31545b11a1ea1cf1b90ebed53 Author: Theodore Ts'o Date: Wed May 22 12:02:16 2019 -0400 random: fix soft lockup when trying to read from an uninitialized blocking pool commit 58be0106c5306b939b07b4b8bf00669a20593f4b upstream. Fixes: eb9d1bf079bb: "random: only read from /dev/random after its pool has received 128 bits" Reported-by: kernel test robot Signed-off-by: Theodore Ts'o Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit a4fb822be21931fe3ec9c90e50080b495aea8f0e Author: Vasily Gorbik Date: Tue May 7 16:28:15 2019 +0200 latent_entropy: avoid build error when plugin cflags are not set commit 7e756f423af808b6571fed3144747db2ef7fa1c5 upstream. Some architectures set up CFLAGS for the linux decompressor phase from scratch and do not include GCC_PLUGINS_CFLAGS. Since the "latent_entropy" variable declaration is generated by the plugin code itself, including linux/random.h in decompressor code would then cause a build error. E.g. on s390: In file included from ./include/linux/net.h:22, from ./include/linux/skbuff.h:29, from ./include/linux/if_ether.h:23, from ./arch/s390/include/asm/diag.h:12, from arch/s390/boot/startup.c:8: ./include/linux/random.h: In function 'add_latent_entropy': ./include/linux/random.h:26:39: error: 'latent_entropy' undeclared (first use in this function); did you mean 'add_latent_entropy'? 26 | add_device_randomness((const void *)&latent_entropy, | ^~~~~~~~~~~~~~ | add_latent_entropy ./include/linux/random.h:26:39: note: each undeclared identifier is reported only once for each function it appears in The build error is triggered by commit a80313ff91ab ("s390/kernel: introduce .dma sections") which made it into the 5.2 merge window.
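The shape of the fix, spelled out next, is to key the declaration's guard off the define provided by the plugin cflags rather than the Kconfig symbol; roughly (a sketch of the helper in include/linux/random.h, not the verbatim hunk):

    #ifdef LATENT_ENTROPY_PLUGIN
    static inline void add_latent_entropy(void)
    {
            /* 'latent_entropy' exists only in objects built with the
             * plugin cflags, so guard on the cflags-provided define */
            add_device_randomness((const void *)&latent_entropy,
                                  sizeof(latent_entropy));
    }
    #else
    static inline void add_latent_entropy(void) {}
    #endif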
To address that, avoid using CONFIG_GCC_PLUGIN_LATENT_ENTROPY in favour of the LATENT_ENTROPY_PLUGIN definition, which is defined as part of the gcc plugins cflags and hence reflects more accurately when the gcc plugin is active. Besides that it is also used for a similar purpose in linux/compiler-gcc.h for the latent_entropy attribute definition. Signed-off-by: Vasily Gorbik Acked-by: Kees Cook Signed-off-by: Martin Schwidefsky Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit d54abb49aef6b158673bfcddf874aee469163b1f Author: George Spelvin Date: Fri Apr 19 23:48:20 2019 -0400 random: document get_random_int() family commit 92e507d216139b356a375afbda2824e85235e748 upstream. Explain what these functions are for and when they offer an advantage over get_random_bytes(). (We still need documentation on rng_is_initialized(), the random_ready_callback system, and early boot in general.) Signed-off-by: George Spelvin Signed-off-by: Theodore Ts'o Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 166a592cad369dcbe942c26bdfdb313f09b1018d Author: Kees Cook Date: Fri Apr 19 23:27:05 2019 -0400 random: move rand_initialize() earlier commit d55535232c3dbde9a523a9d10d68670f5fe5dec3 upstream. Right now rand_initialize() is run as an early_initcall(), but it only depends on timekeeping_init() (for mixing ktime_get_real() into the pools). However, the call to boot_init_stack_canary() for stack canary initialization runs earlier, which triggers a warning at boot: random: get_random_bytes called from start_kernel+0x357/0x548 with crng_init=0 Instead, this moves rand_initialize() to after timekeeping_init(), and moves canary initialization here as well. Note that this warning may still remain for machines that do not have UEFI RNG support (which initializes the RNG pools during setup_arch()), or for x86 machines without RDRAND (or booting without "random.trust_cpu=on" or CONFIG_RANDOM_TRUST_CPU=y). Signed-off-by: Kees Cook Signed-off-by: Theodore Ts'o Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 79246ba8065f420b0eec3cc3afc8d55a5e09c9d0 Author: Theodore Ts'o Date: Wed Feb 20 16:06:38 2019 -0500 random: only read from /dev/random after its pool has received 128 bits commit eb9d1bf079bb438d1a066d72337092935fc770f6 upstream. Immediately after boot, we allow reads from /dev/random before its entropy pool has been fully initialized. Fix this so that we don't allow this until the blocking pool has received 128 bits. We do this by repurposing the initialized flag in the entropy pool struct, and use the initialized flag in the blocking pool to indicate whether it is safe to pull from the blocking pool. To do this, we needed to rework when we decide to push entropy from the input pool to the blocking pool, since the initialized flag for the input pool was used for this purpose. To simplify things, we no longer use the initialized flag for that purpose, nor do we use the entropy_total field any more. Signed-off-by: Theodore Ts'o Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 78b1bfe3e42ace5ee55fe71f998a3459d736bdf4 Author: Rasmus Villemoes Date: Fri Nov 2 12:04:47 2018 +0100 drivers/char/random.c: make primary_crng static commit 764ed189c82090c1d85f0e30636156736d8f09a8 upstream. Since the definition of struct crng_state is private to random.c, and primary_crng is neither declared nor used elsewhere, there's no reason for that symbol to have external linkage.
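The change itself is a one-liner; schematically (a sketch of the hunk, assuming the previous declaration simply lacked a storage-class specifier):

    -struct crng_state primary_crng;
    +static struct crng_state primary_crng;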
Signed-off-by: Rasmus Villemoes Signed-off-by: Theodore Ts'o Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit db8aa1a25fca1299ed1810ac2ad2efe573fdcb56 Author: Rasmus Villemoes Date: Fri Nov 2 12:04:46 2018 +0100 drivers/char/random.c: remove unused struct poolinfo::poolbits commit 3bd0b5bf7dc3ea02070fcbcd682ecf628269e8ef upstream. This field is never used, might as well remove it. Signed-off-by: Rasmus Villemoes Signed-off-by: Theodore Ts'o Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 47811637e492b79106940b2765ebbd10dc270bb2 Author: Rasmus Villemoes Date: Fri Nov 2 12:04:45 2018 +0100 drivers/char/random.c: constify poolinfo_table commit 26e0854ab3310bbeef1ed404a2c87132fc91f8e1 upstream. Never modified, might as well be put in .rodata. Signed-off-by: Rasmus Villemoes Signed-off-by: Theodore Ts'o Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 4bba4e8f42aa2efd0168273d7c5bbfcfd8d229f5 Author: Kees Cook Date: Mon Aug 27 14:51:54 2018 -0700 random: make CPU trust a boot parameter commit 9b25436662d5fb4c66eb527ead53cab15f596ee0 upstream. Instead of forcing a distro or other system builder to choose at build time whether the CPU is trusted for CRNG seeding via CONFIG_RANDOM_TRUST_CPU, provide a boot-time parameter for end users to control the choice. The CONFIG will set the default state instead. Signed-off-by: Kees Cook Signed-off-by: Theodore Ts'o Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 9bb501015bcc83b0eead30044bf20b356b134426 Author: Jason A. Donenfeld Date: Tue Jul 31 21:11:00 2018 +0200 random: Make crng state queryable commit 9a47249d444d344051c7c0e909fad0e88515a5c2 upstream. It is very useful to be able to know whether get_random_bytes_wait / wait_for_random_bytes is going to block or not, or whether plain get_random_bytes is going to return good randomness or bad randomness. The particular use case is for mitigating certain attacks in WireGuard. A handshake packet arrives and is queued up. Elsewhere a worker thread takes items from the queue and processes them. In replying to these items, it needs to use some random data, and it has to be good random data. If we simply block until we can have good randomness, then it's possible for an attacker to fill the queue up with packets waiting to be processed. Upon realizing the queue is full, WireGuard will detect that it's under a denial of service attack, and behave accordingly. A better approach is just to drop incoming handshake packets if the crng is not yet initialized. This patch, therefore, makes that information directly accessible. Signed-off-by: Jason A. Donenfeld Signed-off-by: Theodore Ts'o Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit a5471125eea5513520c9cf0e313bcbb7a0f0a36e Author: Ingo Molnar Date: Sun Jul 22 10:51:50 2018 -0400 random: remove preempt disabled region commit b34fbaa9289328c7aec67d2b8b8b7d02bc61c67d upstream. No need to keep preemption disabled across the whole function. mix_pool_bytes() uses a spin_lock() to protect the pool and there are other places like write_pool() which invoke mix_pool_bytes() without disabling preemption. credit_entropy_bits() is invoked from other places like add_hwgenerator_randomness() without disabling preemption. Before commit 95b709b6be49 ("random: drop trickle mode") the function used __this_cpu_inc_return() which would require disabled preemption.
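A sketch of why the preempt-disabled region is redundant, assuming the 4.9-era entropy_store layout (identifiers as in drivers/char/random.c; _mix_pool_bytes() body elided):

    /* Pool mixing is already serialized by the pool's own spinlock,
     * so callers gain nothing from additionally disabling preemption. */
    static void mix_pool_bytes(struct entropy_store *r, const void *in,
                               int nbytes)
    {
            unsigned long flags;

            spin_lock_irqsave(&r->lock, flags);
            _mix_pool_bytes(r, in, nbytes);
            spin_unlock_irqrestore(&r->lock, flags);
    }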
The preempt_disable() section was added in commit 43d5d3018c37 ("[PATCH] random driver preempt robustness", history tree). It was claimed that the code relied on "vt_ioctl() being called under BKL". Cc: "Theodore Ts'o" Signed-off-by: Ingo Molnar Signed-off-by: Thomas Gleixner [bigeasy: enhance the commit message] Signed-off-by: Sebastian Andrzej Siewior Signed-off-by: Theodore Ts'o Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 933dd2f9aa80cb94b1267a5d9627cadc7f52f02b Author: Theodore Ts'o Date: Tue Jul 17 18:24:27 2018 -0400 random: add a config option to trust the CPU's hwrng commit 39a8883a2b989d1d21bd8dd99f5557f0c5e89694 upstream. This gives the user building their own kernel (or a Linux distribution) the option of deciding whether or not to trust the CPU's hardware random number generator (e.g., RDRAND for x86 CPUs) as being correctly implemented and not having a back door introduced (perhaps courtesy of a Nation State's law enforcement or intelligence agencies). This will prevent getrandom(2) from blocking, if there is a willingness to trust the CPU manufacturer. Signed-off-by: Theodore Ts'o Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit c184f7c0013a5a0922f17a7f80831c640f81a56a Author: Tobin C. Harding Date: Fri Jun 22 09:15:32 2018 +1000 random: Return nbytes filled from hw RNG commit 753d433b586d1d43c487e3d660f5778c7c8d58ea upstream. Currently the function get_random_bytes_arch() has return value 'void'. If the hw RNG fails we currently fall back to using get_random_bytes(). This defeats the purpose of requesting random material from the hw RNG in the first place. There are currently no in-tree users of get_random_bytes_arch(). Only get random bytes from the hw RNG, and make the function return the number of bytes retrieved from the hw RNG. Acked-by: Theodore Ts'o Reviewed-by: Steven Rostedt (VMware) Signed-off-by: Tobin C. Harding Signed-off-by: Theodore Ts'o Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 4b9c6116c295c8a9034c88b35edea751b2cec364 Author: Tobin C. Harding Date: Fri Jun 22 09:15:31 2018 +1000 random: Fix whitespace pre random-bytes work commit 8ddd6efa56c3fe23df9fe4cf5e2b49cc55416ef4 upstream. There are a couple of whitespace issues around the function get_random_bytes_arch(). In preparation for patching this function let's clean them up. Acked-by: Theodore Ts'o Signed-off-by: Tobin C. Harding Signed-off-by: Theodore Ts'o Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 7f0edf190598f05470f8de9eb0b9452f65b3e6d3 Author: Rasmus Villemoes Date: Thu Mar 1 00:22:47 2018 +0100 drivers/char/random.c: remove unused dont_count_entropy commit 5e747dd9be54be190dd6ebeebf4a4a01ba765625 upstream. Ever since "random: kill dead extract_state struct" [1], the dont_count_entropy member of struct timer_rand_state has been effectively unused. Since it hasn't found a new use in 12 years, it's probably safe to finally kill it. [1] Pre-git, https://git.kernel.org/pub/scm/linux/kernel/git/tglx/history.git/commit/?id=c1c48e61c251f57e7a3f1bf11b3c462b2de9dcb5 Signed-off-by: Rasmus Villemoes Signed-off-by: Theodore Ts'o Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 5f3167f9438660896456c907e4c8468db0407a6e Author: Andi Kleen Date: Wed Feb 28 13:43:28 2018 -0800 random: optimize add_interrupt_randomness commit e8e8a2e47db6bb85bb0cb21e77b5c6aaedf864b4 upstream. add_interrupt_randomness() always wakes up code blocking on /dev/random. This wake up is done unconditionally.
Unfortunately this means all interrupts take the wait queue spinlock, which can be rather expensive on large systems processing lots of interrupts. We saw 1% cpu time spinning on this on a large macro workload running on a large system. I believe it's a recent regression (?) Always check if there is a waiter on the wait queue before waking up. This check can be done without taking a spinlock. 1.06% 10460 [kernel.vmlinux] [k] native_queued_spin_lock_slowpath | ---native_queued_spin_lock_slowpath | --0.57%--_raw_spin_lock_irqsave | --0.56%--__wake_up_common_lock credit_entropy_bits add_interrupt_randomness handle_irq_event_percpu handle_irq_event handle_edge_irq handle_irq do_IRQ common_interrupt Signed-off-by: Andi Kleen Signed-off-by: Theodore Ts'o Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 1e8f4f59a0b194b3d64f5537139fb880332d0916 Author: Jason A. Donenfeld Date: Sun Feb 4 23:07:46 2018 +0100 random: always fill buffer in get_random_bytes_wait commit 25e3fca492035a2e1d4ac6e3b1edd9c1acd48897 upstream. In the unfortunate event that a developer fails to check the return value of get_random_bytes_wait, or simply wants to make a "best effort" attempt, for whatever that's worth, it's much better to still fill the buffer with _something_ rather than catastrophically failing in the case of an interruption. This is both a defense in depth measure against inevitable programming bugs, as well as a means of making the API a bit more useful. Signed-off-by: Jason A. Donenfeld Signed-off-by: Theodore Ts'o Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 63c60b6a3e33ec131326d2318e310cf0992e480c Author: Eric Biggers Date: Wed Nov 22 11:51:39 2017 -0800 crypto: chacha20 - Fix keystream alignment for chacha20_block() commit 9f480faec58cd6197a007ea1dcac6b7c3daf1139 upstream. When chacha20_block() outputs the keystream block, it uses 'u32' stores directly. However, the callers (crypto/chacha20_generic.c and drivers/char/random.c) declare the keystream buffer as a 'u8' array, which is not guaranteed to have the needed alignment. Fix it by having both callers declare the keystream as a 'u32' array. For now this is preferable to switching over to the unaligned access macros because chacha20_block() is only being used in cases where we can easily control the alignment (stack buffers). Signed-off-by: Eric Biggers Signed-off-by: Herbert Xu Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 824d2a0f357b42ca8ea437a1b8a1304f0b3f061d Author: Eric Biggers Date: Mon Dec 20 16:41:56 2021 -0600 random: fix data race on crng_node_pool commit 5d73d1e320c3fd94ea15ba5f79301da9a8bcc7de upstream. extract_crng() and crng_backtrack_protect() load crng_node_pool with a plain load, which causes undefined behavior if do_numa_crng_init() modifies it concurrently. Fix this by using READ_ONCE(). Note: as per the previous discussion https://lore.kernel.org/lkml/20211219025139.31085-1-ebiggers@kernel.org/T/#u, READ_ONCE() is believed to be sufficient here, and it was requested that it be used here instead of smp_load_acquire(). Also change do_numa_crng_init() to set crng_node_pool using cmpxchg_release() instead of mb() + cmpxchg(), as the former is sufficient here but is more lightweight. Fixes: 1e7f583af67b ("random: make /dev/urandom scalable for silly userspace programs") Cc: stable@vger.kernel.org Signed-off-by: Eric Biggers Acked-by: Paul E. McKenney Signed-off-by: Jason A. 
Donenfeld Signed-off-by: Greg Kroah-Hartman Signed-off-by: Greg Kroah-Hartman commit 86edede2cd02caa24c10c1a4f2325b610cef9573 Author: Jason A. Donenfeld Date: Fri Feb 21 21:10:37 2020 +0100 random: always use batched entropy for get_random_u{32,64} commit 69efea712f5b0489e67d07565aad5c94e09a3e52 upstream. It turns out that RDRAND is pretty slow. Comparing these two constructions: for (i = 0; i < CHACHA_BLOCK_SIZE; i += sizeof(ret)) arch_get_random_long(&ret); and long buf[CHACHA_BLOCK_SIZE / sizeof(long)]; extract_crng((u8 *)buf); it amortizes out to 352 cycles per long for the top one and 107 cycles per long for the bottom one, on Coffee Lake Refresh, Intel Core i9-9880H. And importantly, the top one has the drawback of not benefiting from the real rng, whereas the bottom one has all the nice benefits of using our own chacha rng. As get_random_u{32,64} gets used in more places (perhaps beyond what it was originally intended for when it was introduced as get_random_{int,long} back in the md5 monstrosity era), it seems like it might be a good thing to strengthen its posture a tiny bit. Doing this should only be stronger and not any weaker because that pool is already initialized with a bunch of rdrand data (when available). This way, we get the benefits of the hardware rng as well as our own rng. Another benefit of this is that we no longer hit pitfalls of the recent stream of AMD bugs in RDRAND. One often-used code pattern for various things is: do { val = get_random_u32(); } while (hash_table_contains_key(val)); That recent AMD bug rendered that pattern useless, whereas we're really very certain that chacha20 output will give well-distributed numbers, no matter what. So, this simplification seems better both from a security perspective and from a performance perspective. Signed-off-by: Jason A. Donenfeld Reviewed-by: Greg Kroah-Hartman Link: https://lore.kernel.org/r/20200221201037.30231-1-Jason@zx2c4.com Signed-off-by: Theodore Ts'o Signed-off-by: Greg Kroah-Hartman Signed-off-by: Greg Kroah-Hartman commit d118cad9f872fc08746c8c3a4e1e8a377d603431 Author: Greg Kroah-Hartman Date: Thu Mar 5 15:48:03 2020 +0100 Revert "char/random: silence a lockdep splat with printk()" This reverts commit 28820c5802f9f83c655ab09ccae8289103ce1490 which is commit 1b710b1b10eff9d46666064ea25f079f70bc67a8 upstream. It causes problems here just like it did in 4.19.y and odds are it will be reverted upstream as well. Reported-by: Guenter Roeck Cc: Sergey Senozhatsky Cc: Qian Cai Cc: Theodore Ts'o Cc: Sasha Levin Signed-off-by: Greg Kroah-Hartman Signed-off-by: Greg Kroah-Hartman commit e52d5836b7a7bf44a995902597a87b3dd2aeec3c Author: Sergey Senozhatsky Date: Wed Nov 13 16:16:25 2019 -0500 char/random: silence a lockdep splat with printk() [ Upstream commit 1b710b1b10eff9d46666064ea25f079f70bc67a8 ] Sergey didn't like the locking order, uart_port->lock -> tty_port->lock uart_write (uart_port->lock) __uart_start pl011_start_tx pl011_tx_chars uart_write_wakeup tty_port_tty_wakeup tty_port_default tty_port_tty_get (tty_port->lock) but that code is so old, and I have no clue how to de-couple it after checking other locks in the splat. There is an ongoing effort to make all printk() deferred, so until that happens, work around it for now as a short-term fix.
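A minimal sketch of the short-term workaround, assuming the converted message is the "crng init done" notice reached from the get_random_u64() path in the splat below:

    /* printk_deferred() queues the message and emits it from a safe
     * context later, so no console/uart locks are taken while the
     * entropy-path locks are still held. */
    printk_deferred(KERN_NOTICE "random: crng init done\n");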
LTP: starting iogen01 (export LTPROOT; rwtest -N iogen01 -i 120s -s read,write -Da -Dv -n 2 500b:$TMPDIR/doio.f1.$$ 1000b:$TMPDIR/doio.f2.$$) WARNING: possible circular locking dependency detected ------------------------------------------------------ doio/49441 is trying to acquire lock: ffff008b7cff7290 (&(&zone->lock)->rlock){..-.}, at: rmqueue+0x138/0x2050 but task is already holding lock: 60ff000822352818 (&pool->lock/1){-.-.}, at: start_flush_work+0xd8/0x3f0 which lock already depends on the new lock. the existing dependency chain (in reverse order) is: -> #4 (&pool->lock/1){-.-.}: lock_acquire+0x320/0x360 _raw_spin_lock+0x64/0x80 __queue_work+0x4b4/0xa10 queue_work_on+0xac/0x11c tty_schedule_flip+0x84/0xbc tty_flip_buffer_push+0x1c/0x28 pty_write+0x98/0xd0 n_tty_write+0x450/0x60c tty_write+0x338/0x474 __vfs_write+0x88/0x214 vfs_write+0x12c/0x1a4 redirected_tty_write+0x90/0xdc do_loop_readv_writev+0x140/0x180 do_iter_write+0xe0/0x10c vfs_writev+0x134/0x1cc do_writev+0xbc/0x130 __arm64_sys_writev+0x58/0x8c el0_svc_handler+0x170/0x240 el0_sync_handler+0x150/0x250 el0_sync+0x164/0x180 -> #3 (&(&port->lock)->rlock){-.-.}: lock_acquire+0x320/0x360 _raw_spin_lock_irqsave+0x7c/0x9c tty_port_tty_get+0x24/0x60 tty_port_default_wakeup+0x1c/0x3c tty_port_tty_wakeup+0x34/0x40 uart_write_wakeup+0x28/0x44 pl011_tx_chars+0x1b8/0x270 pl011_start_tx+0x24/0x70 __uart_start+0x5c/0x68 uart_write+0x164/0x1c8 do_output_char+0x33c/0x348 n_tty_write+0x4bc/0x60c tty_write+0x338/0x474 redirected_tty_write+0xc0/0xdc do_loop_readv_writev+0x140/0x180 do_iter_write+0xe0/0x10c vfs_writev+0x134/0x1cc do_writev+0xbc/0x130 __arm64_sys_writev+0x58/0x8c el0_svc_handler+0x170/0x240 el0_sync_handler+0x150/0x250 el0_sync+0x164/0x180 -> #2 (&port_lock_key){-.-.}: lock_acquire+0x320/0x360 _raw_spin_lock+0x64/0x80 pl011_console_write+0xec/0x2cc console_unlock+0x794/0x96c vprintk_emit+0x260/0x31c vprintk_default+0x54/0x7c vprintk_func+0x218/0x254 printk+0x7c/0xa4 register_console+0x734/0x7b0 uart_add_one_port+0x734/0x834 pl011_register_port+0x6c/0xac sbsa_uart_probe+0x234/0x2ec platform_drv_probe+0xd4/0x124 really_probe+0x250/0x71c driver_probe_device+0xb4/0x200 __device_attach_driver+0xd8/0x188 bus_for_each_drv+0xbc/0x110 __device_attach+0x120/0x220 device_initial_probe+0x20/0x2c bus_probe_device+0x54/0x100 device_add+0xae8/0xc2c platform_device_add+0x278/0x3b8 platform_device_register_full+0x238/0x2ac acpi_create_platform_device+0x2dc/0x3a8 acpi_bus_attach+0x390/0x3cc acpi_bus_attach+0x108/0x3cc acpi_bus_attach+0x108/0x3cc acpi_bus_attach+0x108/0x3cc acpi_bus_scan+0x7c/0xb0 acpi_scan_init+0xe4/0x304 acpi_init+0x100/0x114 do_one_initcall+0x348/0x6a0 do_initcall_level+0x190/0x1fc do_basic_setup+0x34/0x4c kernel_init_freeable+0x19c/0x260 kernel_init+0x18/0x338 ret_from_fork+0x10/0x18 -> #1 (console_owner){-...}: lock_acquire+0x320/0x360 console_lock_spinning_enable+0x6c/0x7c console_unlock+0x4f8/0x96c vprintk_emit+0x260/0x31c vprintk_default+0x54/0x7c vprintk_func+0x218/0x254 printk+0x7c/0xa4 get_random_u64+0x1c4/0x1dc shuffle_pick_tail+0x40/0xac __free_one_page+0x424/0x710 free_one_page+0x70/0x120 __free_pages_ok+0x61c/0xa94 __free_pages_core+0x1bc/0x294 memblock_free_pages+0x38/0x48 __free_pages_memory+0xcc/0xfc __free_memory_core+0x70/0x78 free_low_memory_core_early+0x148/0x18c memblock_free_all+0x18/0x54 mem_init+0xb4/0x17c mm_init+0x14/0x38 start_kernel+0x19c/0x530 -> #0 (&(&zone->lock)->rlock){..-.}: validate_chain+0xf6c/0x2e2c __lock_acquire+0x868/0xc2c lock_acquire+0x320/0x360 _raw_spin_lock+0x64/0x80 
rmqueue+0x138/0x2050 get_page_from_freelist+0x474/0x688 __alloc_pages_nodemask+0x3b4/0x18dc alloc_pages_current+0xd0/0xe0 alloc_slab_page+0x2b4/0x5e0 new_slab+0xc8/0x6bc ___slab_alloc+0x3b8/0x640 kmem_cache_alloc+0x4b4/0x588 __debug_object_init+0x778/0x8b4 debug_object_init_on_stack+0x40/0x50 start_flush_work+0x16c/0x3f0 __flush_work+0xb8/0x124 flush_work+0x20/0x30 xlog_cil_force_lsn+0x88/0x204 [xfs] xfs_log_force_lsn+0x128/0x1b8 [xfs] xfs_file_fsync+0x3c4/0x488 [xfs] vfs_fsync_range+0xb0/0xd0 generic_write_sync+0x80/0xa0 [xfs] xfs_file_buffered_aio_write+0x66c/0x6e4 [xfs] xfs_file_write_iter+0x1a0/0x218 [xfs] __vfs_write+0x1cc/0x214 vfs_write+0x12c/0x1a4 ksys_write+0xb0/0x120 __arm64_sys_write+0x54/0x88 el0_svc_handler+0x170/0x240 el0_sync_handler+0x150/0x250 el0_sync+0x164/0x180 other info that might help us debug this: Chain exists of: &(&zone->lock)->rlock --> &(&port->lock)->rlock --> &pool->lock/1 Possible unsafe locking scenario: CPU0 CPU1 ---- ---- lock(&pool->lock/1); lock(&(&port->lock)->rlock); lock(&pool->lock/1); lock(&(&zone->lock)->rlock); *** DEADLOCK *** 4 locks held by doio/49441: #0: a0ff00886fc27408 (sb_writers#8){.+.+}, at: vfs_write+0x118/0x1a4 #1: 8fff00080810dfe0 (&xfs_nondir_ilock_class){++++}, at: xfs_ilock+0x2a8/0x300 [xfs] #2: ffff9000129f2390 (rcu_read_lock){....}, at: rcu_lock_acquire+0x8/0x38 #3: 60ff000822352818 (&pool->lock/1){-.-.}, at: start_flush_work+0xd8/0x3f0 stack backtrace: CPU: 48 PID: 49441 Comm: doio Tainted: G W Hardware name: HPE Apollo 70 /C01_APACHE_MB , BIOS L50_5.13_1.11 06/18/2019 Call trace: dump_backtrace+0x0/0x248 show_stack+0x20/0x2c dump_stack+0xe8/0x150 print_circular_bug+0x368/0x380 check_noncircular+0x28c/0x294 validate_chain+0xf6c/0x2e2c __lock_acquire+0x868/0xc2c lock_acquire+0x320/0x360 _raw_spin_lock+0x64/0x80 rmqueue+0x138/0x2050 get_page_from_freelist+0x474/0x688 __alloc_pages_nodemask+0x3b4/0x18dc alloc_pages_current+0xd0/0xe0 alloc_slab_page+0x2b4/0x5e0 new_slab+0xc8/0x6bc ___slab_alloc+0x3b8/0x640 kmem_cache_alloc+0x4b4/0x588 __debug_object_init+0x778/0x8b4 debug_object_init_on_stack+0x40/0x50 start_flush_work+0x16c/0x3f0 __flush_work+0xb8/0x124 flush_work+0x20/0x30 xlog_cil_force_lsn+0x88/0x204 [xfs] xfs_log_force_lsn+0x128/0x1b8 [xfs] xfs_file_fsync+0x3c4/0x488 [xfs] vfs_fsync_range+0xb0/0xd0 generic_write_sync+0x80/0xa0 [xfs] xfs_file_buffered_aio_write+0x66c/0x6e4 [xfs] xfs_file_write_iter+0x1a0/0x218 [xfs] __vfs_write+0x1cc/0x214 vfs_write+0x12c/0x1a4 ksys_write+0xb0/0x120 __arm64_sys_write+0x54/0x88 el0_svc_handler+0x170/0x240 el0_sync_handler+0x150/0x250 el0_sync+0x164/0x180 Reviewed-by: Sergey Senozhatsky Signed-off-by: Qian Cai Link: https://lore.kernel.org/r/1573679785-21068-1-git-send-email-cai@lca.pw Signed-off-by: Theodore Ts'o Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman commit 876736acbe0041f40a5ebe20e16aa8e4d481abc6 Author: Sebastian Andrzej Siewior Date: Sat Apr 20 00:09:51 2019 -0400 random: add a spinlock_t to struct batched_entropy [ Upstream commit b7d5dc21072cda7124d13eae2aefb7343ef94197 ] The per-CPU variable batched_entropy_uXX is protected by get_cpu_var(). This is just a preempt_disable() which ensures that the variable is only used from the local CPU. It does not protect against users on the same CPU from another context. It is possible that a preemptible context reads slot 0 and then an interrupt occurs and the same value is read again.
The above scenario is confirmed by lockdep if we add a spinlock: | ================================ | WARNING: inconsistent lock state | 5.1.0-rc3+ #42 Not tainted | -------------------------------- | inconsistent {SOFTIRQ-ON-W} -> {IN-SOFTIRQ-W} usage. | ksoftirqd/9/56 [HC0[0]:SC1[1]:HE0:SE0] takes: | (____ptrval____) (batched_entropy_u32.lock){+.?.}, at: get_random_u32+0x3e/0xe0 | {SOFTIRQ-ON-W} state was registered at: | _raw_spin_lock+0x2a/0x40 | get_random_u32+0x3e/0xe0 | new_slab+0x15c/0x7b0 | ___slab_alloc+0x492/0x620 | __slab_alloc.isra.73+0x53/0xa0 | kmem_cache_alloc_node+0xaf/0x2a0 | copy_process.part.41+0x1e1/0x2370 | _do_fork+0xdb/0x6d0 | kernel_thread+0x20/0x30 | kthreadd+0x1ba/0x220 | ret_from_fork+0x3a/0x50 … | other info that might help us debug this: | Possible unsafe locking scenario: | | CPU0 | ---- | lock(batched_entropy_u32.lock); | <Interrupt> | lock(batched_entropy_u32.lock); | | *** DEADLOCK *** | | stack backtrace: | Call Trace: … | kmem_cache_alloc_trace+0x20e/0x270 | ipmi_alloc_recv_msg+0x16/0x40 … | __do_softirq+0xec/0x48d | run_ksoftirqd+0x37/0x60 | smpboot_thread_fn+0x191/0x290 | kthread+0xfe/0x130 | ret_from_fork+0x3a/0x50 Add a spinlock_t to the batched_entropy data structure and acquire the lock while accessing it. Acquire the lock with disabled interrupts because this function may be used from interrupt context. Remove the batched_entropy_reset_lock lock. Now that we have a lock for the data structure, we can access it from a remote CPU. Signed-off-by: Sebastian Andrzej Siewior Signed-off-by: Theodore Ts'o Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman commit 44ea43bef703ad72d3e1999a319bbb84c6d55876 Author: Theodore Ts'o Date: Wed Apr 25 01:12:32 2018 -0400 random: rate limit unseeded randomness warnings commit 4e00b339e264802851aff8e73cde7d24b57b18ce upstream. On systems without sufficient boot randomness, no point spamming dmesg. Signed-off-by: Theodore Ts'o Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman Signed-off-by: Greg Kroah-Hartman commit 32988e2d4ef3a535a7e61f0beb920d81c7eb597b Author: Theodore Ts'o Date: Mon Apr 23 18:51:28 2018 -0400 random: fix possible sleeping allocation from irq context commit 6c1e851c4edc13a43adb3ea4044e3fc8f43ccf7d upstream. We can do a sleeping allocation from an irq context when CONFIG_NUMA is enabled. Fix this by initializing the NUMA crng instances in a workqueue. Reported-by: Tetsuo Handa Reported-by: syzbot+9de458f6a5e713ee8c1a@syzkaller.appspotmail.com Fixes: 8ef35c866f8862df ("random: set up the NUMA crng instances...") Cc: stable@vger.kernel.org Signed-off-by: Theodore Ts'o Signed-off-by: Greg Kroah-Hartman Signed-off-by: Greg Kroah-Hartman commit 13eec12cfa4ffda624e289e27c2ea53d3c5ed70b Author: Theodore Ts'o Date: Wed Apr 11 15:23:56 2018 -0400 random: set up the NUMA crng instances after the CRNG is fully initialized commit 8ef35c866f8862df074a49a93b0309725812dea8 upstream. Until the primary_crng is fully initialized, don't initialize the NUMA crng nodes. Otherwise users of /dev/urandom on NUMA systems before the CRNG is fully initialized can get very bad quality randomness. Of course everyone should move to getrandom(2) where this won't be an issue, but there's a lot of legacy code out there. This is related to CVE-2018-1108.
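A sketch of the initialization shape that results from the two NUMA crng fixes above, assuming the 4.9-era helper names (do_numa_crng_init() being the routine that performs the sleeping allocations):

    /* Defer the allocations to process context via a workqueue; the
     * trigger below is only invoked once the primary crng is fully
     * seeded, so NUMA nodes never hand out pre-init randomness. */
    static void do_numa_crng_init(struct work_struct *work);
    static DECLARE_WORK(numa_crng_init_work, do_numa_crng_init);

    static void numa_crng_init(void)
    {
            schedule_work(&numa_crng_init_work);
    }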
Reported-by: Jann Horn Fixes: 1e7f583af67b ("random: make /dev/urandom scalable for silly...") Cc: stable@kernel.org # 4.8+ Signed-off-by: Theodore Ts'o Signed-off-by: Greg Kroah-Hartman Signed-off-by: Greg Kroah-Hartman commit 0467c15b092a83a55a5a1746cc8e49107009bfcf Author: Theodore Ts'o Date: Wed Apr 11 14:58:27 2018 -0400 random: use a different mixing algorithm for add_device_randomness() commit dc12baacb95f205948f64dc936a47d89ee110117 upstream. add_device_randomness()'s use of crng_fast_load() was highly problematic. Some callers of add_device_randomness() can pass in a large amount of static information. This would immediately promote the crng_init state from 0 to 1, without really doing much to initialize the primary_crng's internal state with something even vaguely unpredictable. Since we don't have the speed constraints of add_interrupt_randomness(), we can do a better job mixing in whatever unpredictability a device driver or architecture maintainer might see fit to give us, and do it in a way which does not bump the crng_init_cnt variable. Also, since add_device_randomness() doesn't bump any entropy accounting in crng_init state 0, mix the device randomness into the input_pool entropy pool as well. This is related to CVE-2018-1108. Reported-by: Jann Horn Fixes: ee7998c50c26 ("random: do not ignore early device randomness") Cc: stable@kernel.org # 4.13+ Signed-off-by: Theodore Ts'o Signed-off-by: Greg Kroah-Hartman Signed-off-by: Greg Kroah-Hartman commit 10384737546558e17bba2c5fa295641813baae14 Author: Helge Deller Date: Tue Aug 8 18:28:41 2017 +0200 random: fix warning message on ia64 and parisc commit 51d96dc2e2dc2cf9b81cf976cc93c51ba3ac2f92 upstream. Fix the warning message on the parisc and IA64 architectures to show the correct function name of the caller by using %pS instead of %pF. The message is printed with the value of _RET_IP_ which calls __builtin_return_address(0) and as such returns the IP address of the caller instead of a pointer to the caller's function descriptor. The effect of this patch is visible on the parisc and ia64 architectures only, since those are the ones which use function descriptors; on all others %pS and %pF will behave the same. Cc: Theodore Ts'o Cc: Jason A. Donenfeld Signed-off-by: Helge Deller Fixes: eecabf567422 ("random: suppress spammy warnings about unseeded randomness") Fixes: d06bfd1989fe ("random: warn when kernel uses unseeded randomness") Signed-off-by: Linus Torvalds Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 2235bed1ee9ac2a329646adf3ed37d50281c10b6 Author: Sebastian Andrzej Siewior Date: Fri Jun 30 16:37:13 2017 +0200 random: reorder READ_ONCE() in get_random_uXX commit 72e5c740f6335e27253b8ff64d23d00337091535 upstream. Avoid the READ_ONCE in commit 4a072c71f49b ("random: silence compiler warnings and fix race") if we can leave the function after arch_get_random_XXX(). Cc: Jason A. Donenfeld Signed-off-by: Sebastian Andrzej Siewior Signed-off-by: Theodore Ts'o Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 68b20d5c9710665d72c10696412e92cc03547b0f Author: Theodore Ts'o Date: Thu Jun 8 04:16:59 2017 -0400 random: suppress spammy warnings about unseeded randomness commit eecabf567422eda02bd179f2707d8fe24f52d888 upstream. Unfortunately, on some models of some architectures getting a fully seeded CRNG is extremely difficult, and so this can result in dmesg getting spammed for a surprisingly long time.
This is really bad from a security perspective, and so architecture maintainers really need to do what they can to get the CRNG seeded sooner after the system is booted. However, users can't do anything actionable to address this, and spamming the kernel messages log will only annoy people. For developers who want to work on improving this situation, CONFIG_WARN_UNSEEDED_RANDOM has been renamed to CONFIG_WARN_ALL_UNSEEDED_RANDOM. By default the kernel will always print the first use of unseeded randomness. This way, hopefully the security obsessed will be happy that there is _some_ indication when the kernel boots that there may be a potential issue with that architecture or subarchitecture. To see all uses of unseeded randomness, developers can enable CONFIG_WARN_ALL_UNSEEDED_RANDOM. Signed-off-by: Theodore Ts'o Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 8f7353468904507e0050e6cff4ed952502b741eb Author: Kees Cook Date: Wed Jul 12 14:34:04 2017 -0700 random: do not ignore early device randomness commit ee7998c50c2697737c6530431709f77c852bf0d6 upstream. The add_device_randomness() function would ignore incoming bytes if the crng wasn't ready. This additionally makes sure to make an early enough call to add_latent_entropy() to influence the initial stack canary, which is especially important on non-x86 systems where it stays the same through the life of the boot. Link: http://lkml.kernel.org/r/20170626233038.GA48751@beast Signed-off-by: Kees Cook Cc: "Theodore Ts'o" Cc: Arnd Bergmann Cc: Greg Kroah-Hartman Cc: Ingo Molnar Cc: Jessica Yu Cc: Steven Rostedt (VMware) Cc: Viresh Kumar Cc: Tejun Heo Cc: Prarit Bhargava Cc: Lokesh Vutla Cc: Nicholas Piggin Cc: AKASHI Takahiro Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 04e5bfa62023e559cbb78a432ff199bd9b225989 Author: Jason A. Donenfeld Date: Wed Jun 7 23:06:55 2017 -0400 random: warn when kernel uses unseeded randomness commit d06bfd1989fe97623b32d6df4ffa6e4338c99dc8 upstream. This enables an important dmesg notification about when drivers have used the crng without it being seeded first. Prior, these errors would occur silently, and so there hasn't been a great way of diagnosing these types of bugs for obscure setups. By adding this as a config option, we can leave it on by default, so that we learn where these issues happen, in the field, while still allowing some people to turn it off, if they really know what they're doing and do not want the log entries. However, we don't leave it on _completely_ by default. An earlier version of this patch simply had `default y`. I'd really love that, but it turns out, this problem with unseeded randomness being used is really quite present and is going to take a long time to fix. Thus, as a compromise between log-messages-for-all and nobody-knows, this is `default y`, except it is also `depends on DEBUG_KERNEL`. This will ensure that the curious see the messages while others don't have to. Signed-off-by: Jason A. Donenfeld Signed-off-by: Theodore Ts'o Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit ca8f215ec54db947123e51d4dfede8b07daf40e9 Author: Jason A. Donenfeld Date: Wed Jun 7 20:05:02 2017 -0400 random: add get_random_{bytes,u32,u64,int,long,once}_wait family commit da9ba564bd683374b8d319756f312821b8265b06 upstream. These functions are simple convenience wrappers that call wait_for_random_bytes before calling the respective get_random_* function.
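The wrapper shape, as it looks after the later "always fill buffer in get_random_bytes_wait" fix above (a sketch of the include/linux/random.h helper):

    /* Wait for the pool, then extract; the buffer is filled even if
     * the wait is interrupted, and the error code is passed through
     * so careful callers can still detect the early wakeup. */
    static inline int get_random_bytes_wait(void *buf, int nbytes)
    {
            int ret = wait_for_random_bytes();

            get_random_bytes(buf, nbytes);
            return ret;
    }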
Signed-off-by: Jason A. Donenfeld Signed-off-by: Theodore Ts'o Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 89548752d85e97bba241cf2c3299e8f3bac17007 Author: Jason A. Donenfeld Date: Wed Jun 7 19:58:56 2017 -0400 random: add wait_for_random_bytes() API commit e297a783e41560b44e3c14f38e420cba518113b8 upstream. This enables users of get_random_{bytes,u32,u64,int,long} to wait until the pool is ready before using this function, in case they actually want to have reliable randomness. Signed-off-by: Jason A. Donenfeld Signed-off-by: Theodore Ts'o Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 260fa1f3183a8453c251f018f87a3a34b02418fb Author: Jason A. Donenfeld Date: Thu Jun 15 00:45:26 2017 +0200 random: silence compiler warnings and fix race commit 4a072c71f49b0a0e495ea13423bdb850da73c58c upstream. Odd versions of gcc for the sh4 architecture will actually warn about flags being used while uninitialized, so we set them to zero. Non-crazy gccs will optimize that out again, so it doesn't make a difference. Next, over-aggressive gccs could inline the expression that defines use_lock, which could then introduce a race resulting in a lock imbalance. By using READ_ONCE, we prevent that fate. We also make that assignment const, so that gcc can still optimize a nice amount. Finally, we fix a potential deadlock between primary_crng.lock and batched_entropy_reset_lock, where they could be called in opposite order. Moving the call to invalidate_batched_entropy to outside the lock rectifies this issue. Fixes: b169c13de473a85b3c859bb36216a4cb5f00a54a Signed-off-by: Jason A. Donenfeld Signed-off-by: Theodore Ts'o Cc: stable@vger.kernel.org Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 67d99815704116c3b9394974515b6633d99b25e2 Author: Jason A. Donenfeld Date: Wed Jun 7 19:45:31 2017 -0400 random: invalidate batched entropy after crng init commit b169c13de473a85b3c859bb36216a4cb5f00a54a upstream. It's possible that get_random_{u32,u64} is used before the crng has initialized, in which case, its output might not be cryptographically secure. For this problem, directly, this patch set is introducing the *_wait variety of functions, but even with that, there's a subtle issue: what happens to our batched entropy that was generated before initialization. Prior to this commit, it'd stick around, supplying bad numbers. After this commit, we force the entropy to be re-extracted after each phase of the crng has initialized. In order to avoid a race condition with the position counter, we introduce a simple rwlock for this invalidation. Since it's only during this awkward transition period, after things are all set up, we stop using it, so that it doesn't have an impact on performance. Signed-off-by: Jason A. Donenfeld Cc: Greg Kroah-Hartman Signed-off-by: Theodore Ts'o Cc: stable@vger.kernel.org # v4.11+ Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 9d037fdd5f86c4e97c2f049547dbc85d6224456a Author: Fabio Estevam Date: Tue Jan 31 14:36:07 2017 -0200 random: move random_min_urandom_seed into CONFIG_SYSCTL ifdef block commit db61ffe3a71c697aaa91c42c862a5f7557a0e562 upstream. Building arm allnodefconfig causes the following build warning: drivers/char/random.c:318:12: warning: 'random_min_urandom_seed' defined but not used [-Wunused-variable] Fix the warning by moving 'random_min_urandom_seed' declaration inside the CONFIG_SYSCTL ifdef block, where it is actually used.
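A sketch of the move, with the surrounding sysctl table elided (the value 60 matches the historical default and is assumed here):

    #ifdef CONFIG_SYSCTL
    /* Only referenced by the sysctl table in this block, so configs
     * built without CONFIG_SYSCTL no longer see an unused variable. */
    static int random_min_urandom_seed = 60;
    /* ... struct ctl_table random_table[] entry referencing it ... */
    #endif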
While at it, remove the comment prior to the variable declaration. Signed-off-by: Fabio Estevam Signed-off-by: Theodore Ts'o Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 7c8ed90ae33f5fd363839beacd5f1120343f5efd Author: Jason A. Donenfeld Date: Sun Jan 22 16:34:08 2017 +0100 random: convert get_random_int/long into get_random_u32/u64 commit c440408cf6901eeb2c09563397e24a9097907078 upstream. Many times, when a user wants a random number, he wants a random number of a guaranteed size. So, thinking of get_random_int and get_random_long in terms of get_random_u32 and get_random_u64 makes it much easier to achieve this. It also makes the code simpler. On 32-bit platforms, get_random_int and get_random_long are both aliased to get_random_u32. On 64-bit platforms, int->u32 and long->u64. Signed-off-by: Jason A. Donenfeld Cc: Greg Kroah-Hartman Cc: Theodore Ts'o Signed-off-by: Theodore Ts'o Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit d8abc2bdc5d6c808eba329205824984c3f91df03 Author: Stephan Müller Date: Tue Dec 27 23:41:22 2016 +0100 random: fix comment for unused random_min_urandom_seed commit 5d0e5ea343a0f70351428476bcf8715e0731f26a upstream. The variable random_min_urandom_seed is not needed any more, as it defined the reseeding behavior of the now-removed nonblocking pool. Though it is not needed any more, it is left in the code for user space interface compatibility. Signed-off-by: Stephan Mueller Signed-off-by: Theodore Ts'o Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 84818eedf494b305a3405da19123efe1d17adb2b Author: Stephan Müller Date: Tue Dec 27 23:40:59 2016 +0100 random: remove variable limit commit 43d8a72cd985ca5279a9eb84d61fcbb3ee3d3774 upstream. The variable limit was used to identify the nonblocking pool's unlimited random number generation. As the nonblocking pool is a thing of the past, remove the limit variable and any conditions around it (i.e. preserve the branches for limit == 1). Signed-off-by: Stephan Mueller Signed-off-by: Theodore Ts'o Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit e48be7ff82f5c0b2fbe9f281bcff9f23378a6ba8 Author: Stephan Müller Date: Tue Dec 27 23:39:31 2016 +0100 random: remove stale urandom_init_wait commit 2e03c36f25ebb52d3358b8baebcdf96895c33a87 upstream. The urandom_init_wait wait queue is a leftover from the pre-ChaCha20 times and can therefore be safely removed. Signed-off-by: Stephan Mueller Signed-off-by: Theodore Ts'o Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit a86b868cc04ec0c40a9a9b077cd01ece4b228550 Author: Stephan Mueller Date: Thu Dec 15 12:42:33 2016 +0100 random: remove stale maybe_reseed_primary_crng commit 3d071d8da1f586c24863a57349586a1611b9aa67 upstream. The function maybe_reseed_primary_crng is not used anywhere and thus can be removed. Signed-off-by: Stephan Mueller Signed-off-by: Theodore Ts'o Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit fe7cde423488c55ef9dfe1ac8c87ed9af62aca2b Author: Al Viro Date: Sun Jan 31 14:37:39 2021 -0500 9p: missing chunk of "fs/9p: Don't update file type when updating file attributes" commit b577d0cd2104fdfcf0ded3707540a12be8ddd8b0 upstream. In commit 45089142b149 Aneesh had missed one (admittedly, very unlikely to hit) case in v9fs_stat2inode_dotl(). However, the same considerations apply there as well - we have no business whatsoever to change ->i_rdev or the file type.
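A hedged sketch of the pattern commit 45089142b149 established, applied here to v9fs_stat2inode_dotl() (names abbreviated; assumes a stat structure carrying st_mode):

    /* Take only the permission bits from the server's stat; keep the
     * existing inode's file type bits and ->i_rdev untouched. */
    umode_t mode = stat->st_mode & S_IALLUGO;

    mode |= inode->i_mode & ~S_IALLUGO;
    inode->i_mode = mode;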
Cc: Tadeusz Struk Signed-off-by: Al Viro Signed-off-by: Greg Kroah-Hartman