| In the Linux kernel, the following vulnerability has been resolved:
wifi: wilc1000: avoid buffer overflow in WID string configuration
Fix the following copy overflow warning identified by Smatch checker.
drivers/net/wireless/microchip/wilc1000/wlan_cfg.c:184 wilc_wlan_parse_response_frame()
error: '__memcpy()' 'cfg->s[i]->str' copy overflow (512 vs 65537)
This patch introduces size check before accessing the memory buffer.
The checks are based on the WID type of the data received from the firmware.
For WID string configuration, the size limit is determined by the individual
element size in 'struct wilc_cfg_str_vals', which is maintained in the 'len' field
of 'struct wilc_cfg_str'. |
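A minimal sketch of the kind of bounds check described follows; the switch/loop context and the local names 'size', 'pos' and 'frame' are assumptions, while the 'len' field bounding each string element comes from the commit text.
	case WID_STR:
		/* 'size' is parsed from the firmware response and is untrusted;
		 * cfg->s[i].len holds the per-element limit taken from
		 * struct wilc_cfg_str_vals, so never copy more than that. */
		if (size <= cfg->s[i].len)
			memcpy(cfg->s[i].str, &frame[pos], size);
		break;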
| In the Linux kernel, the following vulnerability has been resolved:
dm-stripe: fix a possible integer overflow
There's a possible integer overflow in stripe_io_hints if the chunk size is
too large. Test whether the overflow happened, and if it did, don't set
limits->io_min and limits->io_opt. |
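One way to implement the described test is sketched below using the check_shl_overflow()/check_mul_overflow() helpers from <linux/overflow.h>; whether the actual patch uses these helpers is an assumption, and 'sc' refers to the stripe target's private data.
	unsigned int io_min, io_opt;

	/* sc->chunk_size is in sectors; only set the hints if both the shift
	 * and the multiplication fit into unsigned int. */
	if (!check_shl_overflow(sc->chunk_size, SECTOR_SHIFT, &io_min) &&
	    !check_mul_overflow(io_min, sc->stripes, &io_opt)) {
		limits->io_min = io_min;
		limits->io_opt = io_opt;
	}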
| In the Linux kernel, the following vulnerability has been resolved:
fs: writeback: fix use-after-free in __mark_inode_dirty()
A use-after-free issue occurred when __mark_inode_dirty() got hold of a
bdi_writeback that was in the process of switching.
CPU: 1 PID: 562 Comm: systemd-random- Not tainted 6.6.56-gb4403bd46a8e #1
......
pstate: 60400005 (nZCv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
pc : __mark_inode_dirty+0x124/0x418
lr : __mark_inode_dirty+0x118/0x418
sp : ffffffc08c9dbbc0
........
Call trace:
__mark_inode_dirty+0x124/0x418
generic_update_time+0x4c/0x60
file_modified+0xcc/0xd0
ext4_buffered_write_iter+0x58/0x124
ext4_file_write_iter+0x54/0x704
vfs_write+0x1c0/0x308
ksys_write+0x74/0x10c
__arm64_sys_write+0x1c/0x28
invoke_syscall+0x48/0x114
el0_svc_common.constprop.0+0xc0/0xe0
do_el0_svc+0x1c/0x28
el0_svc+0x40/0xe4
el0t_64_sync_handler+0x120/0x12c
el0t_64_sync+0x194/0x198
Root cause is:
       systemd-random-seed                   kworker
----------------------------------------------------------------------
___mark_inode_dirty                          inode_switch_wbs_work_fn
  spin_lock(&inode->i_lock);
  inode_attach_wb
  locked_inode_to_wb_and_lock_list
    get inode->i_wb
    spin_unlock(&inode->i_lock);
    spin_lock(&wb->list_lock)
  inode_io_list_move_locked
  spin_unlock(&wb->list_lock)
  spin_unlock(&inode->i_lock)
                                             spin_lock(&old_wb->list_lock)
                                             inode_do_switch_wbs
                                               spin_lock(&inode->i_lock)
                                               inode->i_wb = new_wb
                                               spin_unlock(&inode->i_lock)
                                             spin_unlock(&old_wb->list_lock)
                                             wb_put_many(old_wb, nr_switched)
                                               cgwb_release
                                               old wb released
  wb_wakeup_delayed() accesses wb,
  then trigger the use-after-free
  issue
Fix this race condition by holding the inode spinlock until
wb_wakeup_delayed() has finished. |
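A hedged sketch of the idea, not the actual diff: keep the lock that pins inode->i_wb held across wb_wakeup_delayed(), so inode_switch_wbs_work_fn() cannot switch the wb and drop its last reference in between. The names 'wakeup_bdi' and inode_to_wb() follow the existing fs-writeback code; the exact restructuring is an assumption.
	spin_lock(&inode->i_lock);
	wb = inode_to_wb(inode);
	/* ... move the inode onto the wb's dirty list ... */
	if (wakeup_bdi && (wb->bdi->capabilities & BDI_CAP_WRITEBACK))
		wb_wakeup_delayed(wb);
	spin_unlock(&inode->i_lock);	/* only drop the lock after the wakeup */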
| In the Linux kernel, the following vulnerability has been resolved:
usb: dwc3: Remove WARN_ON for device endpoint command timeouts
This commit addresses a rarely observed endpoint command timeout
which causes a kernel panic due to the warning when 'panic_on_warn' is enabled,
and unnecessary call trace prints when it is disabled.
It is seen during fast software-controlled connect/disconnect test cases.
The following is one such endpoint command timeout that we observed:
1. Connect
=======
->dwc3_thread_interrupt
->dwc3_ep0_interrupt
->configfs_composite_setup
->composite_setup
->usb_ep_queue
->dwc3_gadget_ep0_queue
->__dwc3_gadget_ep0_queue
->__dwc3_ep0_do_control_data
->dwc3_send_gadget_ep_cmd
2. Disconnect
==========
->dwc3_thread_interrupt
->dwc3_gadget_disconnect_interrupt
->dwc3_ep0_reset_state
->dwc3_ep0_end_control_data
->dwc3_send_gadget_ep_cmd
In the issue scenario, on Exynos platforms, we observed that control
transfers for the previous connect had not yet completed, and that the end
transfer command sent as part of the disconnect sequence, as well as the
processing of the USB_ENDPOINT_HALT feature request from the host, timed out.
This may be an expected scenario, since the controller is still processing EP
commands sent as part of the previous connect. It may be better to
remove the WARN_ON in all places where device endpoint commands are sent, to
avoid an unnecessary kernel panic due to the warning. |
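A hedged sketch of the general change at one such call site; whether the warning is replaced by a log message or simply dropped is an assumption, and 'dwc' is assumed to be in scope as in the surrounding dwc3 gadget code.
	ret = dwc3_send_gadget_ep_cmd(dep, cmd, &params);
-	WARN_ON_ONCE(ret);
+	if (ret)
+		dev_err(dwc->dev, "failed to send endpoint command: %d\n", ret);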
| In the Linux kernel, the following vulnerability has been resolved:
xfrm: Duplicate SPI Handling
The issue originates when Strongswan initiates an XFRM_MSG_ALLOCSPI
Netlink message, which triggers the kernel function xfrm_alloc_spi().
This function is expected to ensure uniqueness of the Security Parameter
Index (SPI) for inbound Security Associations (SAs). However, it can
return success even when the requested SPI is already in use, leading
to duplicate SPIs assigned to multiple inbound SAs, differentiated
only by their destination addresses.
This behavior causes inconsistencies during SPI lookups for inbound packets.
Since the lookup may return an arbitrary SA among those with the same SPI,
packet processing can fail, resulting in packet drops.
According to RFC 4301 section 4.4.2, for inbound processing a unicast SA
is uniquely identified by the SPI and optionally the protocol.
Reproducing the Issue Reliably:
To consistently reproduce the problem, restrict the available SPI range in
charon.conf:
  spi_min = 0x10000000
  spi_max = 0x10000002
This limits the system to only 2 usable SPI values.
Next, create more than 2 Child SAs, each using a unique pair of src/dst addresses.
As soon as the 3rd Child SA is initiated, it will be assigned a duplicate
SPI, since the SPI pool is already exhausted.
With a narrow SPI range, the issue is consistently reproducible.
With a broader/default range, it becomes rare and unpredictable.
Current implementation:
The xfrm_spi_hash() lookup function computes the hash using daddr, proto, and family.
So if two SAs have the same SPI but different destination addresses, then
they will:
a. Hash into different buckets
b. Be stored in different linked lists (byspi + h)
c. Not be seen in the same hlist_for_each_entry_rcu() iteration.
As a result, the lookup returns NULL and the kernel allows the duplicate SPI.
Proposed Change:
xfrm_state_lookup_spi_proto() does a truly global search across all states,
regardless of hash bucket, matching on SPI and proto. |
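A hedged sketch of the kind of global lookup described, walking every byspi bucket rather than only the one the daddr-based hash points at; the field names follow the existing xfrm state tables, but the exact patch may differ, and reference counting is omitted for brevity.
static struct xfrm_state *xfrm_state_lookup_spi_proto(struct net *net,
						      __be32 spi, u8 proto)
{
	struct xfrm_state *x;
	unsigned int i;

	rcu_read_lock();
	for (i = 0; i <= net->xfrm.state_hmask; i++) {
		hlist_for_each_entry_rcu(x, net->xfrm.state_byspi + i, byspi) {
			if (x->id.spi == spi && x->id.proto == proto) {
				/* duplicate found, regardless of daddr */
				rcu_read_unlock();
				return x;
			}
		}
	}
	rcu_read_unlock();
	return NULL;
}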
| In the Linux kernel, the following vulnerability has been resolved:
ARM: tegra: Use I/O memcpy to write to IRAM
KASAN crashes the kernel trying to check boundaries when using the
normal memcpy(). |
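A minimal sketch of the change; the destination, source and size names here are hypothetical. IRAM is I/O memory, so the copy should go through the I/O accessor, which KASAN does not instrument.
	/* was: memcpy(iram_dest, code, size); */
	memcpy_toio(iram_dest, code, size);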
| In the Linux kernel, the following vulnerability has been resolved:
parisc: Drop WARN_ON_ONCE() from flush_cache_vmap
I have observed the warning occasionally trigger. |
| In the Linux kernel, the following vulnerability has been resolved:
ACPI: APEI: send SIGBUS to current task if synchronous memory error not recovered
If a synchronous error is detected as a result of a user-space process
triggering a 2-bit uncorrected error, the CPU will take a synchronous
error exception such as Synchronous External Abort (SEA) on Arm64. The
kernel will queue a memory_failure() work which poisons the related
page, unmaps the page, and then sends a SIGBUS to the process, so that
a system wide panic can be avoided.
However, no memory_failure() work will be queued when abnormal
synchronous errors occur. These errors can include situations like
invalid PA, unexpected severity, no memory failure config support,
invalid GUID section, etc. In such a case, the user-space process will
trigger SEA again. This loop can potentially exceed the platform
firmware threshold or even trigger a kernel hard lockup, leading to a
system reboot.
Fix it by performing a force kill if no memory_failure() work is queued
for synchronous errors.
[ rjw: Changelog edits ] |
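A hedged sketch of the described fallback, assuming a 'sync' flag marking a synchronous (SEA-style) error and a 'queued' flag recording whether memory_failure() work was actually scheduled:
	/* No memory_failure() work was queued for this synchronous error, so
	 * kill the current task instead of letting it re-trigger the SEA. */
	if (sync && !queued)
		force_sig(SIGBUS);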
| In the Linux kernel, the following vulnerability has been resolved:
usb: core: config: Prevent OOB read in SS endpoint companion parsing
usb_parse_ss_endpoint_companion() checks the descriptor type before the length,
enabling a potential out-of-bounds read past the end of the buffer.
Fix this up by checking the size first before looking at any of the
fields in the descriptor. |
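A hedged sketch of the reordered validation; the exact error handling is an assumption, and 'desc' is the companion descriptor pointer cast from the remaining 'buffer' of 'size' bytes.
	/* validate the remaining buffer length before dereferencing any field */
	if (size < USB_DT_SS_EP_COMP_SIZE ||
	    desc->bDescriptorType != USB_DT_SS_ENDPOINT_COMP) {
		/* too short to be a valid SS endpoint companion descriptor */
		return;
	}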
| In the Linux kernel, the following vulnerability has been resolved:
bpf: Forget ranges when refining tnum after JSET
Syzbot reported a kernel warning due to a range invariant violation on
the following BPF program.
0: call bpf_get_netns_cookie
1: if r0 == 0 goto <exit>
2: if r0 & 0xffffffff goto <exit>
The issue is on the path where we fall through both jumps.
That path is unreachable at runtime: after insn 1, we know r0 != 0, but
with the sign extension on the jset, we would only fallthrough insn 2
if r0 == 0. Unfortunately, is_branch_taken() isn't currently able to
figure this out, so the verifier walks all branches. The verifier then
refines the register bounds using the second condition and we end
up with inconsistent bounds on this unreachable path:
1: if r0 == 0 goto <exit>
r0: u64=[0x1, 0xffffffffffffffff] var_off=(0, 0xffffffffffffffff)
2: if r0 & 0xffffffff goto <exit>
r0 before reg_bounds_sync: u64=[0x1, 0xffffffffffffffff] var_off=(0, 0)
r0 after reg_bounds_sync: u64=[0x1, 0] var_off=(0, 0)
Improving the range refinement for JSET to cover all cases is tricky. We
also don't expect many users to rely on JSET given LLVM doesn't generate
those instructions. So instead of improving the range refinement for
JSETs, Eduard suggested we forget the ranges whenever we're narrowing
tnums after a JSET. This patch implements that approach. |
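A hedged sketch of the suggested approach; the helper name is hypothetical, while __mark_reg_unbounded() and reg_bounds_sync() are existing verifier helpers. After a JSET narrows a register's tnum, the ranges are thrown away and re-derived from var_off alone.
static void forget_ranges_after_jset(struct bpf_reg_state *reg,
				     struct tnum new_var_off)
{
	reg->var_off = new_var_off;	/* keep the refined tnum */
	__mark_reg_unbounded(reg);	/* forget umin/umax/smin/smax */
	reg_bounds_sync(reg);		/* bounds re-derived only from var_off */
}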
| In the Linux kernel, the following vulnerability has been resolved:
rcutorture: Fix rcutorture_one_extend_check() splat in RT kernels
For kernels built with CONFIG_PREEMPT_RT=y, running rcutorture
tests resulted in the following splat:
[ 68.797425] rcutorture_one_extend_check during change: Current 0x1 To add 0x1 To remove 0x0 preempt_count() 0x0
[ 68.797533] WARNING: CPU: 2 PID: 512 at kernel/rcu/rcutorture.c:1993 rcutorture_one_extend_check+0x419/0x560 [rcutorture]
[ 68.797601] Call Trace:
[ 68.797602] <TASK>
[ 68.797619] ? lockdep_softirqs_off+0xa5/0x160
[ 68.797631] rcutorture_one_extend+0x18e/0xcc0 [rcutorture 2466dbd2ff34dbaa36049cb323a80c3306ac997c]
[ 68.797646] ? local_clock+0x19/0x40
[ 68.797659] rcu_torture_one_read+0xf0/0x280 [rcutorture 2466dbd2ff34dbaa36049cb323a80c3306ac997c]
[ 68.797678] ? __pfx_rcu_torture_one_read+0x10/0x10 [rcutorture 2466dbd2ff34dbaa36049cb323a80c3306ac997c]
[ 68.797804] ? __pfx_rcu_torture_timer+0x10/0x10 [rcutorture 2466dbd2ff34dbaa36049cb323a80c3306ac997c]
[ 68.797815] rcu-torture: rcu_torture_reader task started
[ 68.797824] rcu-torture: Creating rcu_torture_reader task
[ 68.797824] rcu_torture_reader+0x238/0x580 [rcutorture 2466dbd2ff34dbaa36049cb323a80c3306ac997c]
[ 68.797836] ? kvm_sched_clock_read+0x15/0x30
Disabling BH does not change the corresponding SOFTIRQ bits in
preempt_count() for RT kernels; this commit therefore uses
softirq_count() to check whether BH is disabled. |
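A hedged sketch of the check change; the flag and variable names are assumptions based on rcutorture's reader-state flags. The point is to use softirq_count() rather than masking preempt_count() directly, since local_bh_disable() does not touch the SOFTIRQ bits of preempt_count() on PREEMPT_RT.
	/* was: WARN_ON_ONCE((statesnew & RCUTORTURE_RDR_BH) &&
	 *                   !(preempt_count() & SOFTIRQ_MASK)); */
	WARN_ON_ONCE((statesnew & RCUTORTURE_RDR_BH) && !softirq_count());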
| In the Linux kernel, the following vulnerability has been resolved:
rcu: Fix rcu_read_unlock() deadloop due to IRQ work
During rcu_read_unlock_special(), if this happens during irq_exit(), we
can lockup if an IPI is issued. This is because the IPI itself triggers
the irq_exit() path causing a recursive lock up.
This is precisely what Xiongfeng found when invoking a BPF program on
the trace_tick_stop() tracepoint, as shown in the trace below. Fix by
managing the irq_work state correctly.
irq_exit()
__irq_exit_rcu()
/* in_hardirq() returns false after this */
preempt_count_sub(HARDIRQ_OFFSET)
tick_irq_exit()
tick_nohz_irq_exit()
tick_nohz_stop_sched_tick()
trace_tick_stop() /* a bpf prog is hooked on this trace point */
__bpf_trace_tick_stop()
bpf_trace_run2()
rcu_read_unlock_special()
/* will send a IPI to itself */
irq_work_queue_on(&rdp->defer_qs_iw, rdp->cpu);
A simple reproducer can also be obtained by doing the following in
tick_irq_exit(). It will hang on boot without the patch:
static inline void tick_irq_exit(void)
{
+ rcu_read_lock();
+ WRITE_ONCE(current->rcu_read_unlock_special.b.need_qs, true);
+ rcu_read_unlock();
+
[neeraj: Apply Frederic's suggested fix for PREEMPT_RT] |
| In the Linux kernel, the following vulnerability has been resolved:
mm/kmemleak: avoid soft lockup in __kmemleak_do_cleanup()
A soft lockup warning was observed on a relatively small x86-64
system with 16 GB of memory when running a debug kernel with kmemleak
enabled.
watchdog: BUG: soft lockup - CPU#8 stuck for 33s! [kworker/8:1:134]
The test system was running a workload with hot unplug happening in
parallel. Then kmemleak decided to disable itself due to its inability to
allocate more kmemleak objects. The debug kernel has its
CONFIG_DEBUG_KMEMLEAK_MEM_POOL_SIZE set to 40,000.
The soft lockup happened in kmemleak_do_cleanup() when the existing
kmemleak objects were being removed and deleted one-by-one in a loop via a
workqueue. In this particular case, there are at least 40,000 objects
that need to be processed and given the slowness of a debug kernel and the
fact that a raw_spinlock has to be acquired and released in
__delete_object(), it could take a while to properly handle all these
objects.
As kmemleak has been disabled in this case, the object removal and
deletion process can be further optimized as locking isn't really needed.
However, it is probably not worth the effort to optimize for such an edge
case that should rarely happen. So the simple solution is to call
cond_resched() at periodic interval in the iteration loop to avoid soft
lockup. |
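A hedged sketch of the described fix in __kmemleak_do_cleanup(); whether the reschedule happens on every iteration or only every N objects is an assumption.
	list_for_each_entry_safe(object, tmp, &object_list, object_list) {
		__remove_object(object);
		__delete_object(object);
		cond_resched();		/* avoid soft lockups on huge object lists */
	}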
| In the Linux kernel, the following vulnerability has been resolved:
hfs: fix general protection fault in hfs_find_init()
The hfs_find_init() method can trigger a crash
if the tree pointer is NULL:
[ 45.746290][ T9787] Oops: general protection fault, probably for non-canonical address 0xdffffc0000000008: 0000 [#1] SMP KAI
[ 45.747287][ T9787] KASAN: null-ptr-deref in range [0x0000000000000040-0x0000000000000047]
[ 45.748716][ T9787] CPU: 2 UID: 0 PID: 9787 Comm: repro Not tainted 6.16.0-rc3 #10 PREEMPT(full)
[ 45.750250][ T9787] Hardware name: QEMU Ubuntu 24.04 PC (i440FX + PIIX, 1996), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[ 45.751983][ T9787] RIP: 0010:hfs_find_init+0x86/0x230
[ 45.752834][ T9787] Code: c1 ea 03 80 3c 02 00 0f 85 9a 01 00 00 4c 8d 6b 40 48 c7 45 18 00 00 00 00 48 b8 00 00 00 00 00 fc
[ 45.755574][ T9787] RSP: 0018:ffffc90015157668 EFLAGS: 00010202
[ 45.756432][ T9787] RAX: dffffc0000000000 RBX: 0000000000000000 RCX: ffffffff819a4d09
[ 45.757457][ T9787] RDX: 0000000000000008 RSI: ffffffff819acd3a RDI: ffffc900151576e8
[ 45.758282][ T9787] RBP: ffffc900151576d0 R08: 0000000000000005 R09: 0000000000000000
[ 45.758943][ T9787] R10: 0000000080000000 R11: 0000000000000001 R12: 0000000000000004
[ 45.759619][ T9787] R13: 0000000000000040 R14: ffff88802c50814a R15: 0000000000000000
[ 45.760293][ T9787] FS: 00007ffb72734540(0000) GS:ffff8880cec64000(0000) knlGS:0000000000000000
[ 45.761050][ T9787] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 45.761606][ T9787] CR2: 00007f9bd8225000 CR3: 000000010979a000 CR4: 00000000000006f0
[ 45.762286][ T9787] Call Trace:
[ 45.762570][ T9787] <TASK>
[ 45.762824][ T9787] hfs_ext_read_extent+0x190/0x9d0
[ 45.763269][ T9787] ? submit_bio_noacct_nocheck+0x2dd/0xce0
[ 45.763766][ T9787] ? __pfx_hfs_ext_read_extent+0x10/0x10
[ 45.764250][ T9787] hfs_get_block+0x55f/0x830
[ 45.764646][ T9787] block_read_full_folio+0x36d/0x850
[ 45.765105][ T9787] ? __pfx_hfs_get_block+0x10/0x10
[ 45.765541][ T9787] ? const_folio_flags+0x5b/0x100
[ 45.765972][ T9787] ? __pfx_hfs_read_folio+0x10/0x10
[ 45.766415][ T9787] filemap_read_folio+0xbe/0x290
[ 45.766840][ T9787] ? __pfx_filemap_read_folio+0x10/0x10
[ 45.767325][ T9787] ? __filemap_get_folio+0x32b/0xbf0
[ 45.767780][ T9787] do_read_cache_folio+0x263/0x5c0
[ 45.768223][ T9787] ? __pfx_hfs_read_folio+0x10/0x10
[ 45.768666][ T9787] read_cache_page+0x5b/0x160
[ 45.769070][ T9787] hfs_btree_open+0x491/0x1740
[ 45.769481][ T9787] hfs_mdb_get+0x15e2/0x1fb0
[ 45.769877][ T9787] ? __pfx_hfs_mdb_get+0x10/0x10
[ 45.770316][ T9787] ? find_held_lock+0x2b/0x80
[ 45.770731][ T9787] ? lockdep_init_map_type+0x5c/0x280
[ 45.771200][ T9787] ? lockdep_init_map_type+0x5c/0x280
[ 45.771674][ T9787] hfs_fill_super+0x38e/0x720
[ 45.772092][ T9787] ? __pfx_hfs_fill_super+0x10/0x10
[ 45.772549][ T9787] ? snprintf+0xbe/0x100
[ 45.772931][ T9787] ? __pfx_snprintf+0x10/0x10
[ 45.773350][ T9787] ? do_raw_spin_lock+0x129/0x2b0
[ 45.773796][ T9787] ? find_held_lock+0x2b/0x80
[ 45.774215][ T9787] ? set_blocksize+0x40a/0x510
[ 45.774636][ T9787] ? sb_set_blocksize+0x176/0x1d0
[ 45.775087][ T9787] ? setup_bdev_super+0x369/0x730
[ 45.775533][ T9787] get_tree_bdev_flags+0x384/0x620
[ 45.775985][ T9787] ? __pfx_hfs_fill_super+0x10/0x10
[ 45.776453][ T9787] ? __pfx_get_tree_bdev_flags+0x10/0x10
[ 45.776950][ T9787] ? bpf_lsm_capable+0x9/0x10
[ 45.777365][ T9787] ? security_capable+0x80/0x260
[ 45.777803][ T9787] vfs_get_tree+0x8e/0x340
[ 45.778203][ T9787] path_mount+0x13de/0x2010
[ 45.778604][ T9787] ? kmem_cache_free+0x2b0/0x4c0
[ 45.779052][ T9787] ? __pfx_path_mount+0x10/0x10
[ 45.779480][ T9787] ? getname_flags.part.0+0x1c5/0x550
[ 45.779954][ T9787] ? putname+0x154/0x1a0
[ 45.780335][ T9787] __x64_sys_mount+0x27b/0x300
[ 45.780758][ T9787] ? __pfx___x64_sys_mount+0x10/0x10
[ 45.781232][ T9787]
---truncated--- |
| In the Linux kernel, the following vulnerability has been resolved:
hfs: fix slab-out-of-bounds in hfs_bnode_read()
This patch introduces the is_bnode_offset_valid() method that checks
the requested offset value. Also, it introduces the
check_and_correct_requested_length() method that checks and
corrects the requested length (if necessary). These methods
are used in hfs_bnode_read(), hfs_bnode_write(), hfs_bnode_clear(),
hfs_bnode_copy(), and hfs_bnode_move() with the goal of preventing
access outside of the allocated memory and triggering a crash. |
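A hedged sketch of the two helpers the patch describes; the bodies are assumptions, only the names and their use in the bnode accessors come from the commit text.
static bool is_bnode_offset_valid(struct hfs_bnode *node, int off)
{
	/* every access must start inside the node */
	return off < node->tree->node_size;
}

static int check_and_correct_requested_length(struct hfs_bnode *node,
					      int off, int len)
{
	int node_size = node->tree->node_size;

	/* clamp the length so off + len stays inside the node */
	if (off + len > node_size)
		return node_size - off;
	return len;
}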
| In the Linux kernel, the following vulnerability has been resolved:
hfsplus: fix slab-out-of-bounds read in hfsplus_uni2asc()
The hfsplus_readdir() method can crash the kernel by calling
hfsplus_uni2asc():
[ 667.121659][ T9805] ==================================================================
[ 667.122651][ T9805] BUG: KASAN: slab-out-of-bounds in hfsplus_uni2asc+0x902/0xa10
[ 667.123627][ T9805] Read of size 2 at addr ffff88802592f40c by task repro/9805
[ 667.124578][ T9805]
[ 667.124876][ T9805] CPU: 3 UID: 0 PID: 9805 Comm: repro Not tainted 6.16.0-rc3 #1 PREEMPT(full)
[ 667.124886][ T9805] Hardware name: QEMU Ubuntu 24.04 PC (i440FX + PIIX, 1996), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[ 667.124890][ T9805] Call Trace:
[ 667.124893][ T9805] <TASK>
[ 667.124896][ T9805] dump_stack_lvl+0x10e/0x1f0
[ 667.124911][ T9805] print_report+0xd0/0x660
[ 667.124920][ T9805] ? __virt_addr_valid+0x81/0x610
[ 667.124928][ T9805] ? __phys_addr+0xe8/0x180
[ 667.124934][ T9805] ? hfsplus_uni2asc+0x902/0xa10
[ 667.124942][ T9805] kasan_report+0xc6/0x100
[ 667.124950][ T9805] ? hfsplus_uni2asc+0x902/0xa10
[ 667.124959][ T9805] hfsplus_uni2asc+0x902/0xa10
[ 667.124966][ T9805] ? hfsplus_bnode_read+0x14b/0x360
[ 667.124974][ T9805] hfsplus_readdir+0x845/0xfc0
[ 667.124984][ T9805] ? __pfx_hfsplus_readdir+0x10/0x10
[ 667.124994][ T9805] ? stack_trace_save+0x8e/0xc0
[ 667.125008][ T9805] ? iterate_dir+0x18b/0xb20
[ 667.125015][ T9805] ? trace_lock_acquire+0x85/0xd0
[ 667.125022][ T9805] ? lock_acquire+0x30/0x80
[ 667.125029][ T9805] ? iterate_dir+0x18b/0xb20
[ 667.125037][ T9805] ? down_read_killable+0x1ed/0x4c0
[ 667.125044][ T9805] ? putname+0x154/0x1a0
[ 667.125051][ T9805] ? __pfx_down_read_killable+0x10/0x10
[ 667.125058][ T9805] ? apparmor_file_permission+0x239/0x3e0
[ 667.125069][ T9805] iterate_dir+0x296/0xb20
[ 667.125076][ T9805] __x64_sys_getdents64+0x13c/0x2c0
[ 667.125084][ T9805] ? __pfx___x64_sys_getdents64+0x10/0x10
[ 667.125091][ T9805] ? __x64_sys_openat+0x141/0x200
[ 667.125126][ T9805] ? __pfx_filldir64+0x10/0x10
[ 667.125134][ T9805] ? do_user_addr_fault+0x7fe/0x12f0
[ 667.125143][ T9805] do_syscall_64+0xc9/0x480
[ 667.125151][ T9805] entry_SYSCALL_64_after_hwframe+0x77/0x7f
[ 667.125158][ T9805] RIP: 0033:0x7fa8753b2fc9
[ 667.125164][ T9805] Code: 00 c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 48
[ 667.125172][ T9805] RSP: 002b:00007ffe96f8e0f8 EFLAGS: 00000217 ORIG_RAX: 00000000000000d9
[ 667.125181][ T9805] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007fa8753b2fc9
[ 667.125185][ T9805] RDX: 0000000000000400 RSI: 00002000000063c0 RDI: 0000000000000004
[ 667.125190][ T9805] RBP: 00007ffe96f8e110 R08: 00007ffe96f8e110 R09: 00007ffe96f8e110
[ 667.125195][ T9805] R10: 0000000000000000 R11: 0000000000000217 R12: 0000556b1e3b4260
[ 667.125199][ T9805] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
[ 667.125207][ T9805] </TASK>
[ 667.125210][ T9805]
[ 667.145632][ T9805] Allocated by task 9805:
[ 667.145991][ T9805] kasan_save_stack+0x20/0x40
[ 667.146352][ T9805] kasan_save_track+0x14/0x30
[ 667.146717][ T9805] __kasan_kmalloc+0xaa/0xb0
[ 667.147065][ T9805] __kmalloc_noprof+0x205/0x550
[ 667.147448][ T9805] hfsplus_find_init+0x95/0x1f0
[ 667.147813][ T9805] hfsplus_readdir+0x220/0xfc0
[ 667.148174][ T9805] iterate_dir+0x296/0xb20
[ 667.148549][ T9805] __x64_sys_getdents64+0x13c/0x2c0
[ 667.148937][ T9805] do_syscall_64+0xc9/0x480
[ 667.149291][ T9805] entry_SYSCALL_64_after_hwframe+0x77/0x7f
[ 667.149809][ T9805]
[ 667.150030][ T9805] The buggy address belongs to the object at ffff88802592f000
[ 667.150030][ T9805] which belongs to the cache kmalloc-2k of size 2048
[ 667.151282][ T9805] The buggy address is located 0 bytes to the right of
[ 667.151282][ T9805] allocated 1036-byte region [ffff88802592f000, ffff88802592f40c)
[ 667.1
---truncated--- |
| In the Linux kernel, the following vulnerability has been resolved:
hfsplus: don't use BUG_ON() in hfsplus_create_attributes_file()
When the volume header contains erroneous values that do not reflect
the actual state of the filesystem, hfsplus_fill_super() assumes that
the attributes file is not yet created, which later results in hitting
BUG_ON() when hfsplus_create_attributes_file() is called. Replace this
BUG_ON() with an -EIO error and a message suggesting to run the fsck tool. |
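A hedged sketch of the replacement, assuming the check guards an already-present attributes tree near the top of hfsplus_create_attributes_file():
	if (sbi->attr_tree) {
		pr_err("attributes file already exists, run fsck\n");
		return -EIO;
	}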
| In the Linux kernel, the following vulnerability has been resolved:
gfs2: Validate i_depth for exhash directories
A fuzzer test introduced corruption that ends up with a depth of 0 in
dir_e_read(), causing an undefined shift by 32 at:
index = hash >> (32 - dip->i_depth);
As calculated in an open-coded way in dir_make_exhash(), the minimum
depth for an exhash directory is ilog2(sdp->sd_hash_ptrs) and 0 is
invalid as sdp->sd_hash_ptrs is fixed as sdp->bsize / 16 at mount time.
So we can avoid the undefined behaviour by checking for depth values
lower than the minimum in gfs2_dinode_in(). Values greater than the
maximum are already being checked for there.
Also switch the calculation in dir_make_exhash() to use ilog2() to
clarify how the depth is calculated.
Tested with the syzkaller repro.c and xfstests '-g quick'. |
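A hedged sketch of the described check and the clarified calculation; the exact placement and error handling in gfs2_dinode_in() and dir_make_exhash() are assumptions.
	/* gfs2_dinode_in(): an exhash directory can never be shallower than
	 * ilog2(sdp->sd_hash_ptrs); values above the maximum are already
	 * rejected there. */
	if (S_ISDIR(inode->i_mode) && (ip->i_diskflags & GFS2_DIF_EXHASH) &&
	    ip->i_depth < ilog2(sdp->sd_hash_ptrs))
		return -EIO;

	/* dir_make_exhash(): derive the initial depth with ilog2() instead of
	 * the open-coded calculation. */
	dip->i_depth = ilog2(sdp->sd_hash_ptrs);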
| In the Linux kernel, the following vulnerability has been resolved:
loop: Avoid updating block size under exclusive owner
Syzbot came up with a reproducer where a loop device block size is
changed underneath a mounted filesystem. This causes a mismatch between
the block device block size and the block size stored in the superblock
causing confusion in various places such as fs/buffer.c. The particular
issue triggered by syzbot was a warning in __getblk_slow() due to
requested buffer size not matching block device block size.
Fix the problem by getting an exclusive hold of the loop device to change
its block size. This fails if somebody (such as a filesystem) already has
exclusive ownership of the block device, and thus prevents modifying the
loop device under an exclusive owner which doesn't expect it. |
| In the Linux kernel, the following vulnerability has been resolved:
drbd: add missing kref_get in handle_write_conflicts
With `two-primaries` enabled, DRBD tries to detect "concurrent" writes
and handle write conflicts, so that even if you write to the same sector
simultaneously on both nodes, they end up with the identical data once
the writes are completed.
In handling "superseded" writes, we forgot a kref_get,
resulting in a premature drbd_destroy_device and use-after-free,
leading further to kernel crashes.
Relevance: no one should use DRBD as a random data generator, and apparently
all users of "two-primaries" handle concurrent writes correctly at the layer above.
That is, cluster file systems use some distributed lock manager,
and live migration in virtualization environments stops writes on one node
before starting writes on the other node.
Which means that other than for "test cases",
this code path is never taken in real life.
FYI, in DRBD 9, things are handled differently nowadays. We still detect
"write conflicts", but no longer try to be smart about them.
We decided to disconnect hard instead: upper layers must not submit concurrent
writes. If they do, that's their fault. |