| In the Linux kernel, the following vulnerability has been resolved:
drm/amdgpu: fix a job->pasid access race in gpu recovery
Avoid a possible UAF in GPU recovery due to a race between
the sched timeout callback and the tdr work queue.
The gpu recovery function calls drm_sched_stop() and
later drm_sched_start(). drm_sched_start() restarts
the tdr queue, which will eventually free the job. If
the tdr queue frees the job before the timeout callback
completes, accessing the pasid afterwards is a
use-after-free. Cache the pasid early to avoid it.
Example KASAN trace:
[ 493.058141] BUG: KASAN: slab-use-after-free in amdgpu_device_gpu_recover+0x968/0x990 [amdgpu]
[ 493.067530] Read of size 4 at addr ffff88b0ce3f794c by task kworker/u128:1/323
[ 493.074892]
[ 493.076485] CPU: 9 UID: 0 PID: 323 Comm: kworker/u128:1 Tainted: G E 6.16.0-1289896.2.zuul.bf4f11df81c1410bbe901c4373305a31 #1 PREEMPT(voluntary)
[ 493.076493] Tainted: [E]=UNSIGNED_MODULE
[ 493.076495] Hardware name: TYAN B8021G88V2HR-2T/S8021GM2NR-2T, BIOS V1.03.B10 04/01/2019
[ 493.076500] Workqueue: amdgpu-reset-dev drm_sched_job_timedout [gpu_sched]
[ 493.076512] Call Trace:
[ 493.076515] <TASK>
[ 493.076518] dump_stack_lvl+0x64/0x80
[ 493.076529] print_report+0xce/0x630
[ 493.076536] ? _raw_spin_lock_irqsave+0x86/0xd0
[ 493.076541] ? __pfx__raw_spin_lock_irqsave+0x10/0x10
[ 493.076545] ? amdgpu_device_gpu_recover+0x968/0x990 [amdgpu]
[ 493.077253] kasan_report+0xb8/0xf0
[ 493.077258] ? amdgpu_device_gpu_recover+0x968/0x990 [amdgpu]
[ 493.077965] amdgpu_device_gpu_recover+0x968/0x990 [amdgpu]
[ 493.078672] ? __pfx_amdgpu_device_gpu_recover+0x10/0x10 [amdgpu]
[ 493.079378] ? amdgpu_coredump+0x1fd/0x4c0 [amdgpu]
[ 493.080111] amdgpu_job_timedout+0x642/0x1400 [amdgpu]
[ 493.080903] ? pick_task_fair+0x24e/0x330
[ 493.080910] ? __pfx_amdgpu_job_timedout+0x10/0x10 [amdgpu]
[ 493.081702] ? _raw_spin_lock+0x75/0xc0
[ 493.081708] ? __pfx__raw_spin_lock+0x10/0x10
[ 493.081712] drm_sched_job_timedout+0x1b0/0x4b0 [gpu_sched]
[ 493.081721] ? __pfx__raw_spin_lock_irq+0x10/0x10
[ 493.081725] process_one_work+0x679/0xff0
[ 493.081732] worker_thread+0x6ce/0xfd0
[ 493.081736] ? __pfx_worker_thread+0x10/0x10
[ 493.081739] kthread+0x376/0x730
[ 493.081744] ? __pfx_kthread+0x10/0x10
[ 493.081748] ? __pfx__raw_spin_lock_irq+0x10/0x10
[ 493.081751] ? __pfx_kthread+0x10/0x10
[ 493.081755] ret_from_fork+0x247/0x330
[ 493.081761] ? __pfx_kthread+0x10/0x10
[ 493.081764] ret_from_fork_asm+0x1a/0x30
[ 493.081771] </TASK>
(cherry picked from commit 20880a3fd5dd7bca1a079534cf6596bda92e107d) |
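An illustrative sketch of the cache-early pattern described in this entry (not the actual patch); the helper name and the logging call are assumptions, and the reset sequence is elided:
```
/* Sketch: snapshot job->pasid while the job is still guaranteed alive. */
static void gpu_recover_sketch(struct amdgpu_device *adev, struct amdgpu_job *job)
{
	u32 job_pasid = job ? job->pasid : 0;	/* cached before the scheduler restarts */

	/*
	 * drm_sched_stop() ... hardware reset ... drm_sched_start():
	 * after this point the tdr queue may free the job at any time.
	 */

	/* use the cached value; never dereference job again */
	dev_info(adev->dev, "GPU reset for pasid %u\n", job_pasid);
}
```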
| In the Linux kernel, the following vulnerability has been resolved:
tpm2-sessions: Fix out of range indexing in name_size
'name_size' does not have any range checks, and it just indexes directly
with TPM_ALG_ID, which could lead to memory corruption at worst.
Address the issue by only processing known values and returning -EINVAL for
unrecognized values.
Also make 'tpm_buf_append_name' and 'tpm_buf_fill_hmac_session' fallible so
that errors are detected before they cause any spurious TPM traffic.
Also end the authorization session on failure in both functions, as the
session state would then by definition be corrupted. |
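A sketch of the bounds handling described above, assuming a hypothetical helper; the exact algorithm list and size mapping in the driver may differ:
```
/* Sketch: map only known TPM_ALG_* values to a name size, reject the rest. */
static int name_size_sketch(const u8 *ptr)
{
	u16 alg = be16_to_cpup((const __be16 *)ptr);

	switch (alg) {
	case TPM_ALG_SHA1:
		return SHA1_DIGEST_SIZE + 2;
	case TPM_ALG_SHA256:
		return SHA256_DIGEST_SIZE + 2;
	case TPM_ALG_SHA384:
		return SHA384_DIGEST_SIZE + 2;
	case TPM_ALG_SHA512:
		return SHA512_DIGEST_SIZE + 2;
	default:
		return -EINVAL;	/* unrecognized algorithm: refuse to index anything */
	}
}
```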
| In the Linux kernel, the following vulnerability has been resolved:
fuse: missing copy_finish in fuse-over-io-uring argument copies
Fix a possible reference count leak of payload pages during
fuse argument copies.
[Joanne: simplified error cleanup] |
| In the Linux kernel, the following vulnerability has been resolved:
net/mlx5: Fix double unregister of HCA_PORTS component
Clear hca_devcom_comp in device's private data after unregistering it in
LAG teardown. Otherwise a slightly lagging second pass through
mlx5_unload_one() might try to unregister it again and trip over a
use-after-free.
On s390 almost all PCI-level recovery events trigger two passes through
mlx5_unload_one() - one through the poll_health() method and one through
mlx5_pci_err_detected() as a callback from generic PCI error recovery.
While testing PCI error recovery paths with more kernel debug features
enabled, this issue reproducibly led to kernel panics with the following
call chain:
Unable to handle kernel pointer dereference in virtual kernel address space
Failing address: 6b6b6b6b6b6b6000 TEID: 6b6b6b6b6b6b6803 ESOP-2 FSI
Fault in home space mode while using kernel ASCE.
AS:00000000705c4007 R3:0000000000000024
Oops: 0038 ilc:3 [#1]SMP
CPU: 14 UID: 0 PID: 156 Comm: kmcheck Kdump: loaded Not tainted
6.18.0-20251130.rc7.git0.16131a59cab1.300.fc43.s390x+debug #1 PREEMPT
Krnl PSW : 0404e00180000000 0000020fc86aa1dc (__lock_acquire+0x5c/0x15f0)
R:0 T:1 IO:0 EX:0 Key:0 M:1 W:0 P:0 AS:3 CC:2 PM:0 RI:0 EA:3
Krnl GPRS: 0000000000000000 0000020f00000001 6b6b6b6b6b6b6c33 0000000000000000
0000000000000000 0000000000000000 0000000000000001 0000000000000000
0000000000000000 0000020fca28b820 0000000000000000 0000010a1ced8100
0000010a1ced8100 0000020fc9775068 0000018fce14f8b8 0000018fce14f7f8
Krnl Code: 0000020fc86aa1cc: e3b003400004 lg %r11,832
0000020fc86aa1d2: a7840211 brc 8,0000020fc86aa5f4
*0000020fc86aa1d6: c09000df0b25 larl %r9,0000020fca28b820
>0000020fc86aa1dc: d50790002000 clc 0(8,%r9),0(%r2)
0000020fc86aa1e2: a7840209 brc 8,0000020fc86aa5f4
0000020fc86aa1e6: c0e001100401 larl %r14,0000020fca8aa9e8
0000020fc86aa1ec: c01000e25a00 larl %r1,0000020fca2f55ec
0000020fc86aa1f2: a7eb00e8 aghi %r14,232
Call Trace:
__lock_acquire+0x5c/0x15f0
lock_acquire.part.0+0xf8/0x270
lock_acquire+0xb0/0x1b0
down_write+0x5a/0x250
mlx5_detach_device+0x42/0x110 [mlx5_core]
mlx5_unload_one_devl_locked+0x50/0xc0 [mlx5_core]
mlx5_unload_one+0x42/0x60 [mlx5_core]
mlx5_pci_err_detected+0x94/0x150 [mlx5_core]
zpci_event_attempt_error_recovery+0xcc/0x388 |
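A sketch of the clear-after-unregister pattern that makes the second teardown pass a no-op; the field and helper names are taken from the description and are only assumed to match the driver:
```
/* Sketch: make component unregistration idempotent across unload passes. */
static void hca_ports_teardown_sketch(struct mlx5_core_dev *dev)
{
	if (!dev->priv.hca_devcom_comp)		/* already torn down by the other path */
		return;

	mlx5_devcom_unregister_component(dev->priv.hca_devcom_comp);
	dev->priv.hca_devcom_comp = NULL;	/* a lagging mlx5_unload_one() now skips it */
}
```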
| In the Linux kernel, the following vulnerability has been resolved:
hwmon: (ibmpex) fix use-after-free in high/low store
The ibmpex_high_low_store() function retrieves driver data using
dev_get_drvdata() and uses it without validation. This creates a race
condition where the sysfs callback can be invoked after the data
structure is freed, leading to a use-after-free.
Fix by adding a NULL check after dev_get_drvdata(), and by reordering
operations in the deletion path to prevent a TOCTOU race. |
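A sketch of the NULL check, assuming the usual dev_get_drvdata() pattern in a sysfs store callback; the struct name is an assumption:
```
/* Sketch: bail out if driver data is already gone when the store runs. */
static ssize_t high_low_store_sketch(struct device *dev,
				     struct device_attribute *attr,
				     const char *buf, size_t count)
{
	struct ibmpex_bmc_data *data = dev_get_drvdata(dev);

	if (!data)		/* device is being torn down; nothing to update */
		return -ENODEV;

	/* ... parse buf and update data safely ... */
	return count;
}
```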
| In the Linux kernel, the following vulnerability has been resolved:
fsnotify: do not generate ACCESS/MODIFY events on child for special files
inotify/fanotify do not allow users with no read access to a file to
subscribe to events (e.g. IN_ACCESS/IN_MODIFY), but they do allow the
same user to subscribe for watching events on children when the user
has access to the parent directory (e.g. /dev).
Users with no read access to a file but with read access to its parent
directory can still stat the file and see if it was accessed/modified
via atime/mtime change.
The same is not true for special files (e.g. /dev/null). Users will not
generally observe atime/mtime changes when other users read/write to
special files, only when someone sets atime/mtime via utimensat().
Align fsnotify events with this stat behavior and do not generate
ACCESS/MODIFY events to parent watchers on read/write of special files.
The events are still generated to parent watchers on utimensat(). This
closes some side channels that could possibly be used for information
exfiltration [1].
[1] https://snee.la/pdf/pubs/file-notification-attacks.pdf |
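A rough sketch of the mode check this implies; the helper is hypothetical and the real hook placement inside fsnotify differs:
```
/* Sketch: suppress parent ACCESS/MODIFY notification for special files. */
static bool notify_parent_on_io_sketch(const struct inode *inode, __u32 mask)
{
	if (!(mask & (FS_ACCESS | FS_MODIFY)))
		return true;		/* other event types are unaffected */

	/*
	 * Character/block devices, FIFOs and sockets do not update
	 * atime/mtime on read/write, so do not report the access to
	 * parent watchers either.
	 */
	return S_ISREG(inode->i_mode) || S_ISDIR(inode->i_mode);
}
```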
| In the Linux kernel, the following vulnerability has been resolved:
netrom: Fix memory leak in nr_sendmsg()
syzbot reported a memory leak [1].
When sock_alloc_send_skb() returns NULL in nr_output(), the original skb,
which was allocated in nr_sendmsg(), is not freed. Fix this by freeing it
before returning.
[1]
BUG: memory leak
unreferenced object 0xffff888129f35500 (size 240):
comm "syz.0.17", pid 6119, jiffies 4294944652
hex dump (first 32 bytes):
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
00 00 00 00 00 00 00 00 00 10 52 28 81 88 ff ff ..........R(....
backtrace (crc 1456a3e4):
kmemleak_alloc_recursive include/linux/kmemleak.h:44 [inline]
slab_post_alloc_hook mm/slub.c:4983 [inline]
slab_alloc_node mm/slub.c:5288 [inline]
kmem_cache_alloc_node_noprof+0x36f/0x5e0 mm/slub.c:5340
__alloc_skb+0x203/0x240 net/core/skbuff.c:660
alloc_skb include/linux/skbuff.h:1383 [inline]
alloc_skb_with_frags+0x69/0x3f0 net/core/skbuff.c:6671
sock_alloc_send_pskb+0x379/0x3e0 net/core/sock.c:2965
sock_alloc_send_skb include/net/sock.h:1859 [inline]
nr_sendmsg+0x287/0x450 net/netrom/af_netrom.c:1105
sock_sendmsg_nosec net/socket.c:727 [inline]
__sock_sendmsg net/socket.c:742 [inline]
sock_write_iter+0x293/0x2a0 net/socket.c:1195
new_sync_write fs/read_write.c:593 [inline]
vfs_write+0x45d/0x710 fs/read_write.c:686
ksys_write+0x143/0x170 fs/read_write.c:738
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0xa4/0xfa0 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f |
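A sketch of the error-path fix; the allocation size and the rest of nr_output() are elided and illustrative:
```
/* Sketch: free the skb that nr_sendmsg() allocated if fragmenting fails. */
static void nr_output_sketch(struct sock *sk, struct sk_buff *skb)
{
	struct sk_buff *skbn;
	int err;

	skbn = sock_alloc_send_skb(sk, skb->len + NR_NETWORK_LEN, 0, &err);
	if (!skbn) {
		kfree_skb(skb);	/* previously leaked: the kmemleak object above */
		return;
	}

	/* ... copy/fragment into skbn and hand it to the transmit path ... */
}
```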
| In the Linux kernel, the following vulnerability has been resolved:
ksmbd: skip lock-range check on equal size to avoid size==0 underflow
When size equals the current i_size (including 0), the code used to call
check_lock_range(filp, i_size, size - 1, WRITE), which computes `size - 1`
and can underflow for size==0. Skip the equal case. |
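A sketch of the adjusted guard, as a fragment; check_lock_range() and the variables come from the description, the exact condition in the handler is an assumption:
```
/* Sketch: only check a lock range when the size actually grows, so that
 * size - 1 can no longer underflow when size == i_size == 0. */
if (size > i_size) {
	err = check_lock_range(filp, i_size, size - 1, WRITE);
	if (err)
		return err;
}
/* size == i_size: nothing changes, skip the check entirely */
```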
| In the Linux kernel, the following vulnerability has been resolved:
net: openvswitch: fix middle attribute validation in push_nsh() action
The push_nsh() action structure looks like this:
OVS_ACTION_ATTR_PUSH_NSH(OVS_KEY_ATTR_NSH(OVS_NSH_KEY_ATTR_BASE,...))
The outermost OVS_ACTION_ATTR_PUSH_NSH attribute is OK'ed by the
nla_for_each_nested() inside __ovs_nla_copy_actions(). The innermost
OVS_NSH_KEY_ATTR_BASE/MD1/MD2 are OK'ed by the nla_for_each_nested()
inside nsh_key_put_from_nlattr(). But nothing checks if the attribute
in the middle is OK. We don't even check that this attribute is the
OVS_KEY_ATTR_NSH. We just do a double unwrap with a pair of nla_data()
calls - first time directly while calling validate_push_nsh() and the
second time as part of the nla_for_each_nested() macro, which isn't
safe, potentially causing invalid memory access if the size of this
attribute is incorrect. The failure may not be noticed during
validation due to the larger netlink buffer, but it causes trouble later
during action execution, where the buffer is allocated to the exact size:
BUG: KASAN: slab-out-of-bounds in nsh_hdr_from_nlattr+0x1dd/0x6a0 [openvswitch]
Read of size 184 at addr ffff88816459a634 by task a.out/22624
CPU: 8 UID: 0 PID: 22624 6.18.0-rc7+ #115 PREEMPT(voluntary)
Call Trace:
<TASK>
dump_stack_lvl+0x51/0x70
print_address_description.constprop.0+0x2c/0x390
kasan_report+0xdd/0x110
kasan_check_range+0x35/0x1b0
__asan_memcpy+0x20/0x60
nsh_hdr_from_nlattr+0x1dd/0x6a0 [openvswitch]
push_nsh+0x82/0x120 [openvswitch]
do_execute_actions+0x1405/0x2840 [openvswitch]
ovs_execute_actions+0xd5/0x3b0 [openvswitch]
ovs_packet_cmd_execute+0x949/0xdb0 [openvswitch]
genl_family_rcv_msg_doit+0x1d6/0x2b0
genl_family_rcv_msg+0x336/0x580
genl_rcv_msg+0x9f/0x130
netlink_rcv_skb+0x11f/0x370
genl_rcv+0x24/0x40
netlink_unicast+0x73e/0xaa0
netlink_sendmsg+0x744/0xbf0
__sys_sendto+0x3d6/0x450
do_syscall_64+0x79/0x2c0
entry_SYSCALL_64_after_hwframe+0x76/0x7e
</TASK>
Let's add some checks that the attribute is properly sized and that it is
the only attribute inside the action. Technically, there is no real reason
for OVS_KEY_ATTR_NSH to be there, as we already know that we're pushing an
NSH header; it just creates extra nesting. But that's how the uAPI works
today, so keep it as it is. |
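A sketch of the middle-attribute validation; whether it lives in validate_push_nsh() or its caller is an assumption:
```
/* Sketch: the OVS_ACTION_ATTR_PUSH_NSH payload must be exactly one
 * well-formed OVS_KEY_ATTR_NSH attribute before it is unwrapped twice. */
static bool validate_push_nsh_attr_sketch(const struct nlattr *attr)
{
	const struct nlattr *nshkey = nla_data(attr);
	int rem = nla_len(attr);

	/* header present and declared length fits in the action payload */
	if (!nla_ok(nshkey, rem))
		return false;

	/* only OVS_KEY_ATTR_NSH makes sense in the middle */
	if (nla_type(nshkey) != OVS_KEY_ATTR_NSH)
		return false;

	/* and it must be the only attribute inside the action */
	nla_next(nshkey, &rem);
	if (rem > 0)
		return false;

	return true;
}
```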
| In the Linux kernel, the following vulnerability has been resolved:
xfs: fix a UAF problem in xattr repair
The xchk_setup_xattr_buf function can allocate a new value buffer, which
means that any reference to ab->value before the call could become a
dangling pointer. Fix this by moving an assignment to after the buffer
setup. |
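A sketch of the reordering as a fragment; the variable names around xchk_setup_xattr_buf() are illustrative:
```
/* Sketch: take the pointer only after the buffer may have been reallocated. */
error = xchk_setup_xattr_buf(sc, valuelen);
if (error)
	return error;

value = ab->value;	/* previously read before the call: could dangle */
```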
| In the Linux kernel, the following vulnerability has been resolved:
ALSA: usb-mixer: us16x08: validate meter packet indices
get_meter_levels_from_urb() parses the 64-byte meter packets sent by
the device and fills the per-channel arrays meter_level[],
comp_level[] and master_level[] in struct snd_us16x08_meter_store.
Currently the function derives the channel index directly from the
meter packet (MUB2(meter_urb, s) - 1) and uses it to index those
arrays without validating the range. If the packet contains a
negative or out-of-range channel number, the driver may write past
the end of these arrays.
Introduce a local channel variable and validate it before updating the
arrays. We reject negative indices, limit meter_level[] and
comp_level[] to SND_US16X08_MAX_CHANNELS, and guard master_level[]
updates with ARRAY_SIZE(master_level). |
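A sketch of the validation as a fragment; MUB2(), the array names and the constant come from the description, while the already-parsed 'level'/'comp_level' values are placeholders:
```
int channel = MUB2(meter_urb, s) - 1;	/* channel number supplied by the device */

if (channel < 0)
	continue;			/* reject malformed/negative indices */

if (channel < SND_US16X08_MAX_CHANNELS) {
	store->meter_level[channel] = level;
	store->comp_level[channel] = comp_level;
}

if (channel < ARRAY_SIZE(store->master_level))
	store->master_level[channel] = level;
```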
| In the Linux kernel, the following vulnerability has been resolved:
scsi: target: Reset t_task_cdb pointer in error case
If allocation of cmd->t_task_cdb fails, it remains NULL but is later
dereferenced in the 'err' path.
In case of error, reset the NULL t_task_cdb pointer to point at the default
fixed-size buffer.
Found by Linux Verification Center (linuxtesting.org) with SVACE. |
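A sketch of the reset, assuming the usual se_cmd layout with an embedded fixed-size CDB buffer next to the t_task_cdb pointer:
```
/* Sketch: keep t_task_cdb pointing at valid memory even on failure. */
if (scsi_command_size(cdb) > sizeof(cmd->__t_task_cdb)) {
	cmd->t_task_cdb = kzalloc(scsi_command_size(cdb), GFP_KERNEL);
	if (!cmd->t_task_cdb) {
		cmd->t_task_cdb = &cmd->__t_task_cdb[0];	/* not NULL in the err path */
		ret = TCM_OUT_OF_RESOURCES;
		goto err;
	}
}
```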
| In the Linux kernel, the following vulnerability has been resolved:
usb: phy: fsl-usb: Fix use-after-free in delayed work during device removal
The delayed work item otg_event is initialized in fsl_otg_conf() and
scheduled under two conditions:
1. When a host controller binds to the OTG controller.
2. When the USB ID pin state changes (cable insertion/removal).
A race condition occurs when the device is removed via fsl_otg_remove():
the fsl_otg instance may be freed while the delayed work is still pending
or executing. This leads to a use-after-free when the work function
fsl_otg_event() accesses the already freed memory.
The problematic scenario:
(detach thread)             |  (delayed work)
fsl_otg_remove()            |
  kfree(fsl_otg_dev) //FREE |  fsl_otg_event()
                            |    og = container_of(...) //USE
                            |    og-> //USE
Fix this by calling disable_delayed_work_sync() in fsl_otg_remove()
before deallocating the fsl_otg structure. This ensures the delayed work
is properly canceled and completes execution prior to memory deallocation.
This bug was identified through static analysis. |
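A sketch of the teardown ordering; disable_delayed_work_sync() is the helper named above, the rest of the remove path is elided:
```
/* Sketch: the delayed work must be unable to run before its data is freed. */
static void fsl_otg_remove_sketch(struct platform_device *pdev)
{
	disable_delayed_work_sync(&fsl_otg_dev->otg_event);	/* cancel + forbid rescheduling */

	/* ... unregister the transceiver, release resources ... */

	kfree(fsl_otg_dev);	/* now safe: fsl_otg_event() can no longer run */
}
```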
| In the Linux kernel, the following vulnerability has been resolved:
sched/deadline: only set free_cpus for online runqueues
Commit 16b269436b72 ("sched/deadline: Modify cpudl::free_cpus
to reflect rd->online") introduced the cpudl_set/clear_freecpu
functions to allow the cpu_dl::free_cpus mask to be manipulated
by the deadline scheduler class rq_on/offline callbacks so the
mask would also reflect this state.
Commit 9659e1eeee28 ("sched/deadline: Remove cpu_active_mask
from cpudl_find()") removed the check of the cpu_active_mask to
save some processing on the premise that the cpudl::free_cpus
mask already reflected the runqueue online state.
Unfortunately, there are cases where it is possible for the
cpudl_clear function to set the free_cpus bit for a CPU when the
deadline runqueue is offline. When this occurs while a CPU is
connected to the default root domain, the flag may retain the bad
state after the CPU has been unplugged. Later, a different CPU
that is transitioning through the default root domain may push a
deadline task to the powered-down CPU when cpudl_find() sees that
its free_cpus bit is set. If this happens, the task will not have the
opportunity to run.
One example is outlined here:
https://lore.kernel.org/lkml/20250110233010.2339521-1-opendmb@gmail.com
Another occurs when the last deadline task is migrated from a
CPU that has an offlined runqueue. The dequeue_task member of
the deadline scheduler class will eventually call cpudl_clear
and set the free_cpus bit for the CPU.
This commit modifies the cpudl_clear function to be aware of the
online state of the deadline runqueue so that the free_cpus mask
can be updated appropriately.
It is no longer necessary to manage the mask outside of the
cpudl_set/clear functions so the cpudl_set/clear_freecpu
functions are removed. In addition, since the free_cpus mask is
now only updated under the cpudl lock the code was changed to
use the non-atomic __cpumask functions. |
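A sketch of the reworked cpudl_clear(); the extra 'online' parameter and the heap bookkeeping are assumptions based on the description:
```
/* Sketch: only mark the CPU free when its deadline runqueue is online;
 * free_cpus is updated under cp->lock, so the non-atomic helpers suffice. */
static void cpudl_clear_sketch(struct cpudl *cp, int cpu, bool online)
{
	unsigned long flags;

	raw_spin_lock_irqsave(&cp->lock, flags);

	/* ... remove the CPU's element from the max-heap ... */

	if (online)
		__cpumask_set_cpu(cpu, cp->free_cpus);
	else
		__cpumask_clear_cpu(cpu, cp->free_cpus);

	raw_spin_unlock_irqrestore(&cp->lock, flags);
}
```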
| In the Linux kernel, the following vulnerability has been resolved:
net/mlx5e: Avoid unregistering PSP twice
PSP is unregistered twice in:
_mlx5e_remove -> mlx5e_psp_unregister
mlx5e_nic_cleanup -> mlx5e_psp_unregister
This leads to a refcount underflow in some conditions:
------------[ cut here ]------------
refcount_t: underflow; use-after-free.
WARNING: CPU: 2 PID: 1694 at lib/refcount.c:28 refcount_warn_saturate+0xd8/0xe0
[...]
mlx5e_psp_unregister+0x26/0x50 [mlx5_core]
mlx5e_nic_cleanup+0x26/0x90 [mlx5_core]
mlx5e_remove+0xe6/0x1f0 [mlx5_core]
auxiliary_bus_remove+0x18/0x30
device_release_driver_internal+0x194/0x1f0
bus_remove_device+0xc6/0x130
device_del+0x159/0x3c0
mlx5_rescan_drivers_locked+0xbc/0x2a0 [mlx5_core]
[...]
Do not unregister PSP directly from the _mlx5e_remove path; the PSP
cleanup happens as part of the profile cleanup. |
| In the Linux kernel, the following vulnerability has been resolved:
btrfs: don't log conflicting inode if it's a dir moved in the current transaction
We can't log a conflicting inode if it's a directory and it was moved
from one parent directory to another parent directory in the current
transaction, as this can result in an attempt to have a directory with
two hard links during log replay, one for the old parent directory and
another for the new parent directory.
The following scenario triggers that issue:
1) We have directories "dir1" and "dir2" created in a past transaction.
Directory "dir1" has inode A as its parent directory;
2) We move "dir1" to some other directory;
3) We create a file with the name "dir1" in directory inode A;
4) We fsync the new file. This results in logging the inode of the new file
and the inode for the directory "dir1" that was previously moved in the
current transaction. So the log tree has the INODE_REF item for the
new location of "dir1";
5) We move the new file to some other directory. This results in updating
the log tree to include the new INODE_REF for the new location of the
file and remove the INODE_REF for the old location. This happens
during the rename when we call btrfs_log_new_name();
6) We fsync the file, and that persists the log tree changes done in the
previous step (btrfs_log_new_name() only updates the log tree in
memory);
7) We have a power failure;
8) Next time the fs is mounted, log replay happens and when processing
the inode for directory "dir1" we find a new INODE_REF and add that
link, but we don't remove the old link of the inode since we have
not logged the old parent directory of the directory inode "dir1".
As a result, after log replay finishes, when we trigger writeback of the
subvolume tree's extent buffers, the tree checker will detect that we have
a directory with a hard link count of 2 and we get a mount failure.
The errors and stack traces reported in dmesg/syslog are like this:
[ 3845.729764] BTRFS info (device dm-0): start tree-log replay
[ 3845.730304] page: refcount:3 mapcount:0 mapping:000000005c8a3027 index:0x1d00 pfn:0x11510c
[ 3845.731236] memcg:ffff9264c02f4e00
[ 3845.731751] aops:btree_aops [btrfs] ino:1
[ 3845.732300] flags: 0x17fffc00000400a(uptodate|private|writeback|node=0|zone=2|lastcpupid=0x1ffff)
[ 3845.733346] raw: 017fffc00000400a 0000000000000000 dead000000000122 ffff9264d978aea8
[ 3845.734265] raw: 0000000000001d00 ffff92650e6d4738 00000003ffffffff ffff9264c02f4e00
[ 3845.735305] page dumped because: eb page dump
[ 3845.735981] BTRFS critical (device dm-0): corrupt leaf: root=5 block=30408704 slot=6 ino=257, invalid nlink: has 2 expect no more than 1 for dir
[ 3845.737786] BTRFS info (device dm-0): leaf 30408704 gen 10 total ptrs 17 free space 14881 owner 5
[ 3845.737789] BTRFS info (device dm-0): refs 4 lock_owner 0 current 30701
[ 3845.737792] item 0 key (256 INODE_ITEM 0) itemoff 16123 itemsize 160
[ 3845.737794] inode generation 3 transid 9 size 16 nbytes 16384
[ 3845.737795] block group 0 mode 40755 links 1 uid 0 gid 0
[ 3845.737797] rdev 0 sequence 2 flags 0x0
[ 3845.737798] atime 1764259517.0
[ 3845.737800] ctime 1764259517.572889464
[ 3845.737801] mtime 1764259517.572889464
[ 3845.737802] otime 1764259517.0
[ 3845.737803] item 1 key (256 INODE_REF 256) itemoff 16111 itemsize 12
[ 3845.737805] index 0 name_len 2
[ 3845.737807] item 2 key (256 DIR_ITEM 2363071922) itemoff 16077 itemsize 34
[ 3845.737808] location key (257 1 0) type 2
[ 3845.737810] transid 9 data_len 0 name_len 4
[ 3845.737811] item 3 key (256 DIR_ITEM 2676584006) itemoff 16043 itemsize 34
[ 3845.737813] location key (258 1 0) type 2
[ 3845.737814] transid 9 data_len 0 name_len 4
[ 3845.737815] item 4 key (256 DIR_INDEX 2) itemoff 16009 itemsize 34
[ 3845.737816] location key (257 1 0) type 2
[
---truncated--- |
| In the Linux kernel, the following vulnerability has been resolved:
Input: ti_am335x_tsc - fix off-by-one error in wire_order validation
The current validation 'wire_order[i] > ARRAY_SIZE(config_pins)' allows
wire_order[i] to equal ARRAY_SIZE(config_pins), which causes out-of-bounds
access when used as index in 'config_pins[wire_order[i]]'.
Since config_pins has 4 elements (indices 0-3), the valid range for
wire_order should be 0-3. Fix the off-by-one error by using >= instead
of > in the validation check. |
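The fix itself is a one-character change; a sketch of the corrected check as a fragment, with names taken from the description:
```
/* config_pins[] has 4 entries, so the only valid indices are 0..3;
 * '>' alone still lets wire_order[i] == 4 slip through. */
for (i = 0; i < ARRAY_SIZE(config_pins); i++) {
	if (wire_order[i] >= ARRAY_SIZE(config_pins))	/* was '>' */
		return -EINVAL;
	/* config_pins[wire_order[i]] is now guaranteed to be in bounds */
}
```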
| In the Linux kernel, the following vulnerability has been resolved:
net/hsr: fix NULL pointer dereference in prp_get_untagged_frame()
prp_get_untagged_frame() calls __pskb_copy() to create frame->skb_std
but doesn't check if the allocation failed. If __pskb_copy() returns
NULL, skb_clone() is called with a NULL pointer, causing a crash:
Oops: general protection fault, probably for non-canonical address 0xdffffc000000000f: 0000 [#1] SMP KASAN NOPTI
KASAN: null-ptr-deref in range [0x0000000000000078-0x000000000000007f]
CPU: 0 UID: 0 PID: 5625 Comm: syz.1.18 Not tainted syzkaller #0 PREEMPT(full)
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2~bpo12+1 04/01/2014
RIP: 0010:skb_clone+0xd7/0x3a0 net/core/skbuff.c:2041
Code: 03 42 80 3c 20 00 74 08 4c 89 f7 e8 23 29 05 f9 49 83 3e 00 0f 85 a0 01 00 00 e8 94 dd 9d f8 48 8d 6b 7e 49 89 ee 49 c1 ee 03 <43> 0f b6 04 26 84 c0 0f 85 d1 01 00 00 44 0f b6 7d 00 41 83 e7 0c
RSP: 0018:ffffc9000d00f200 EFLAGS: 00010207
RAX: ffffffff892235a1 RBX: 0000000000000000 RCX: ffff88803372a480
RDX: 0000000000000000 RSI: 0000000000000820 RDI: 0000000000000000
RBP: 000000000000007e R08: ffffffff8f7d0f77 R09: 1ffffffff1efa1ee
R10: dffffc0000000000 R11: fffffbfff1efa1ef R12: dffffc0000000000
R13: 0000000000000820 R14: 000000000000000f R15: ffff88805144cc00
FS: 0000555557f6d500(0000) GS:ffff88808d72f000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000555581d35808 CR3: 000000005040e000 CR4: 0000000000352ef0
Call Trace:
<TASK>
hsr_forward_do net/hsr/hsr_forward.c:-1 [inline]
hsr_forward_skb+0x1013/0x2860 net/hsr/hsr_forward.c:741
hsr_handle_frame+0x6ce/0xa70 net/hsr/hsr_slave.c:84
__netif_receive_skb_core+0x10b9/0x4380 net/core/dev.c:5966
__netif_receive_skb_one_core net/core/dev.c:6077 [inline]
__netif_receive_skb+0x72/0x380 net/core/dev.c:6192
netif_receive_skb_internal net/core/dev.c:6278 [inline]
netif_receive_skb+0x1cb/0x790 net/core/dev.c:6337
tun_rx_batched+0x1b9/0x730 drivers/net/tun.c:1485
tun_get_user+0x2b65/0x3e90 drivers/net/tun.c:1953
tun_chr_write_iter+0x113/0x200 drivers/net/tun.c:1999
new_sync_write fs/read_write.c:593 [inline]
vfs_write+0x5c9/0xb30 fs/read_write.c:686
ksys_write+0x145/0x250 fs/read_write.c:738
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0xfa/0xfa0 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f0449f8e1ff
Code: 89 54 24 18 48 89 74 24 10 89 7c 24 08 e8 f9 92 02 00 48 8b 54 24 18 48 8b 74 24 10 41 89 c0 8b 7c 24 08 b8 01 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 31 44 89 c7 48 89 44 24 08 e8 4c 93 02 00 48
RSP: 002b:00007ffd7ad94c90 EFLAGS: 00000293 ORIG_RAX: 0000000000000001
RAX: ffffffffffffffda RBX: 00007f044a1e5fa0 RCX: 00007f0449f8e1ff
RDX: 000000000000003e RSI: 0000200000000500 RDI: 00000000000000c8
RBP: 00007ffd7ad94d20 R08: 0000000000000000 R09: 0000000000000000
R10: 000000000000003e R11: 0000000000000293 R12: 0000000000000001
R13: 00007f044a1e5fa0 R14: 00007f044a1e5fa0 R15: 0000000000000003
</TASK>
Add a NULL check immediately after __pskb_copy() to handle allocation
failures gracefully. |
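A sketch of the check as a fragment; the headroom argument and the surrounding lazy-copy logic are illustrative:
```
/* Sketch: __pskb_copy() can fail under memory pressure; bail out before
 * handing a NULL pointer to skb_clone(). */
frame->skb_std = __pskb_copy(skb, skb_headroom(skb), GFP_ATOMIC);
if (!frame->skb_std)
	return NULL;	/* drop the frame gracefully instead of crashing */

return skb_clone(frame->skb_std, GFP_ATOMIC);
```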
| In the Linux kernel, the following vulnerability has been resolved:
net/handshake: duplicate handshake cancellations leak socket
When a handshake request is cancelled it is removed from the
handshake_net->hn_requests list, but it is still present in the
handshake_rhashtbl until it is destroyed.
If a second cancellation request arrives for the same handshake request,
then remove_pending() will return false... and assuming
HANDSHAKE_F_REQ_COMPLETED isn't set in req->hr_flags, we'll continue
processing through the out_true label, where we put another reference on
the sock and a refcount underflow occurs.
This can happen for example if a handshake times out - particularly if
the SUNRPC client sends the AUTH_TLS probe to the server but doesn't
follow it up with the ClientHello due to a problem with tlshd. When the
timeout is hit on the server, the server will send a FIN, which triggers
a cancellation request via xs_reset_transport(). When the timeout is
hit on the client, another cancellation request happens via
xs_tls_handshake_sync().
Add a test_and_set_bit(HANDSHAKE_F_REQ_COMPLETED) in the pending cancel
path so duplicate cancels can be detected. |
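A sketch of the duplicate-cancel detection; the flag and field names come from the description, the surrounding function is elided:
```
/* Sketch: the first cancel marks the request completed; any later cancel
 * sees the bit already set and must not drop the sock reference again. */
static bool handshake_req_cancel_sketch(struct handshake_req *req)
{
	if (test_and_set_bit(HANDSHAKE_F_REQ_COMPLETED, &req->hr_flags))
		return false;	/* already cancelled/completed: skip the sock_put() */

	/* ... remove from the pending list, drop the sock reference once ... */
	return true;
}
```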
| In the Linux kernel, the following vulnerability has been resolved:
hfsplus: fix missing hfs_bnode_get() in __hfs_bnode_create
When sync() and link() are called concurrently, both threads may
enter hfs_bnode_find() without finding the node in the hash table
and proceed to create it.
Thread A:
hfsplus_write_inode()
-> hfsplus_write_system_inode()
-> hfs_btree_write()
-> hfs_bnode_find(tree, 0)
-> __hfs_bnode_create(tree, 0)
Thread B:
hfsplus_create_cat()
-> hfs_brec_insert()
-> hfs_bnode_split()
-> hfs_bmap_alloc()
-> hfs_bnode_find(tree, 0)
-> __hfs_bnode_create(tree, 0)
In this case, thread A creates the bnode, sets refcnt=1, and hashes it.
Thread B also tries to create the same bnode, notices it has already
been inserted, drops its own instance, and uses the hashed one without
taking a reference on it.
```
node2 = hfs_bnode_findhash(tree, cnid);
if (!node2) { <- Thread A
hash = hfs_bnode_hash(cnid);
node->next_hash = tree->node_hash[hash];
tree->node_hash[hash] = node;
tree->node_hash_cnt++;
} else { <- Thread B
spin_unlock(&tree->hash_lock);
kfree(node);
wait_event(node2->lock_wq,
!test_bit(HFS_BNODE_NEW, &node2->flags));
return node2;
}
```
However, hfs_bnode_find() requires each call to take a reference.
Here both threads end up setting refcnt=1. When they later put the node,
this triggers:
BUG_ON(!atomic_read(&node->refcnt))
In this scenario, Thread B in fact finds the node in the hash table
rather than creating a new one, and thus must take a reference.
Fix this by calling hfs_bnode_get() when reusing a bnode newly created by
another thread to ensure the refcount is updated correctly.
A similar bug was fixed in HFS long ago in commit
a9dc087fd3c4 ("fix missing hfs_bnode_get() in __hfs_bnode_create")
but the same issue remained in HFS+ until now. |
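A sketch of the fix against the excerpt quoted above: take a reference before reusing the node that the other thread created and hashed.
```
	} else {			/* Thread B: reuse the node Thread A hashed */
		hfs_bnode_get(node2);	/* previously missing reference */
		spin_unlock(&tree->hash_lock);
		kfree(node);
		wait_event(node2->lock_wq,
			   !test_bit(HFS_BNODE_NEW, &node2->flags));
		return node2;
	}
```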