| In the Linux kernel, the following vulnerability has been resolved:
RDMA/hfi1: fix potential memory leak in setup_base_ctxt()
setup_base_ctxt() allocates a memory chunk for uctxt->groups with
hfi1_alloc_ctxt_rcv_groups(). When init_user_ctxt() fails, uctxt->groups
is not released, which will lead to a memory leak.
We should release the uctxt->groups with hfi1_free_ctxt_rcv_groups()
when init_user_ctxt() fails. |
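A minimal userspace C sketch of the fix pattern described above (types, names, and error codes are placeholders, not the actual hfi1 code): a resource allocated early in setup is released on the later failure path before the error is returned.
#include <stdio.h>
#include <stdlib.h>

struct ctxt {
	int *groups;
};

static int alloc_rcv_groups(struct ctxt *c)
{
	c->groups = calloc(16, sizeof(*c->groups));
	return c->groups ? 0 : -12;		/* -ENOMEM */
}

static void free_rcv_groups(struct ctxt *c)
{
	free(c->groups);
	c->groups = NULL;
}

static int init_user_ctxt(struct ctxt *c)
{
	(void)c;
	return -22;				/* simulate failure (-EINVAL) */
}

static int setup_base_ctxt(struct ctxt *c)
{
	int ret = alloc_rcv_groups(c);

	if (ret)
		return ret;

	ret = init_user_ctxt(c);
	if (ret)
		free_rcv_groups(c);		/* release the groups on failure */
	return ret;
}

int main(void)
{
	struct ctxt c = { 0 };

	printf("setup_base_ctxt: %d, groups: %p\n", setup_base_ctxt(&c),
	       (void *)c.groups);
	return 0;
}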
| In the Linux kernel, the following vulnerability has been resolved:
RDMA/rxe: Fix BUG: KASAN: null-ptr-deref in rxe_qp_do_cleanup
The function rxe_create_qp calls rxe_qp_from_init. If some error
occurs, the error handler of function rxe_qp_from_init will set
both scq and rcq to NULL.
Then rxe_create_qp calls rxe_put to handle qp. In the end,
rxe_qp_do_cleanup is called by rxe_put. rxe_qp_do_cleanup directly
accesses scq and rcq without first checking them for NULL, which causes a
null-ptr-deref error.
The call graph is as below:
rxe_create_qp {
    ...
    rxe_qp_from_init {
        ...
    err1:
        ...
        qp->rcq = NULL; <---rcq is set to NULL
        qp->scq = NULL; <---scq is set to NULL
        ...
    }
qp_init:
    rxe_put {
        ...
        rxe_qp_do_cleanup {
            ...
            atomic_dec(&qp->scq->num_wq); <--- scq is accessed
            ...
            atomic_dec(&qp->rcq->num_wq); <--- rcq is accessed
        }
    } |
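A minimal userspace C sketch of the defensive pattern the fix calls for (hypothetical types, not the actual rxe code): the cleanup path tolerates scq/rcq fields that the error path already set to NULL.
#include <stdio.h>
#include <stddef.h>

struct cq { int num_wq; };
struct qp { struct cq *scq; struct cq *rcq; };

static void qp_do_cleanup(struct qp *qp)
{
	/* Check before dereferencing: scq/rcq may be NULL if the
	 * init path failed part-way through. */
	if (qp->scq)
		qp->scq->num_wq--;
	if (qp->rcq)
		qp->rcq->num_wq--;
}

int main(void)
{
	struct qp qp = { .scq = NULL, .rcq = NULL };

	qp_do_cleanup(&qp);		/* safe even after a failed init */
	printf("cleanup survived NULL scq/rcq\n");
	return 0;
}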
| In the Linux kernel, the following vulnerability has been resolved:
RDMA/siw: Fix duplicated reported IW_CM_EVENT_CONNECT_REPLY event
If siw_recv_mpa_rr returns -EAGAIN, the MPA reply has not been received
completely, so IW_CM_EVENT_CONNECT_REPLY should not be reported in this
case. Reporting it may trigger a call trace in iw_cm. A simple way to
trigger this:
trigger this:
server: ib_send_lat
client: ib_send_lat -R <server_ip>
The call trace looks like this:
kernel BUG at drivers/infiniband/core/iwcm.c:894!
invalid opcode: 0000 [#1] PREEMPT SMP NOPTI
<...>
Workqueue: iw_cm_wq cm_work_handler [iw_cm]
Call Trace:
<TASK>
cm_work_handler+0x1dd/0x370 [iw_cm]
process_one_work+0x1e2/0x3b0
worker_thread+0x49/0x2e0
? rescuer_thread+0x370/0x370
kthread+0xe5/0x110
? kthread_complete_and_exit+0x20/0x20
ret_from_fork+0x1f/0x30
</TASK> |
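A minimal userspace C sketch of the idea (hypothetical names, not the siw code): when the receive path signals -EAGAIN, no connect-reply event is generated, so the event is reported at most once.
#include <errno.h>
#include <stdio.h>

static int reply_events;

static int recv_mpa_reply(int simulate_partial)
{
	return simulate_partial ? -EAGAIN : 0;	/* -EAGAIN: reply incomplete */
}

static void process_mpa_reply(int simulate_partial)
{
	if (recv_mpa_reply(simulate_partial) == -EAGAIN)
		return;			/* wait for more data, report nothing */

	reply_events++;			/* report the connect-reply event once */
}

int main(void)
{
	process_mpa_reply(1);		/* partial reply: no event */
	process_mpa_reply(0);		/* complete reply: one event */
	printf("connect-reply events reported: %d\n", reply_events);
	return 0;
}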
| In the Linux kernel, the following vulnerability has been resolved:
RDMA/irdma: Fix a window for use-after-free
During CQ destroy, an interrupt may cause processing of a CQE after the CQ
resources are freed by irdma_cq_free_rsrc(). Fix this by moving the call
to irdma_cq_free_rsrc() after irdma_sc_cleanup_ceqes(), which is
called under the cq_lock. |
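A minimal userspace C sketch of the ordering (hypothetical names, with a pthread mutex standing in for cq_lock): pending CQEs are cleaned up under the lock first, and the CQ resources are freed only afterwards, closing the window in which an interrupt could process a CQE against freed memory.
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct cq {
	pthread_mutex_t lock;
	int *cqes;
};

static void cleanup_ceqes(struct cq *cq)
{
	(void)cq;		/* drop pending CQEs while holding the lock */
}

static void cq_free_rsrc(struct cq *cq)
{
	free(cq->cqes);
	cq->cqes = NULL;
}

static void destroy_cq(struct cq *cq)
{
	pthread_mutex_lock(&cq->lock);
	cleanup_ceqes(cq);	/* step 1: detach CQEs under cq_lock */
	pthread_mutex_unlock(&cq->lock);

	cq_free_rsrc(cq);	/* step 2: free resources only afterwards */
}

int main(void)
{
	struct cq cq = { .lock = PTHREAD_MUTEX_INITIALIZER, .cqes = NULL };

	cq.cqes = calloc(4, sizeof(*cq.cqes));
	destroy_cq(&cq);
	printf("cq destroyed: resources freed after cleanup\n");
	return 0;
}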
| In the Linux kernel, the following vulnerability has been resolved:
mm: fix a UAF when vma->mm is freed after vma->vm_refcnt got dropped
By inducing delays in the right places, Jann Horn created a reproducer for
a hard to hit UAF issue that became possible after VMAs were allowed to be
recycled by adding SLAB_TYPESAFE_BY_RCU to their cache.
Race description is borrowed from Jann's discovery report:
lock_vma_under_rcu() looks up a VMA locklessly with mas_walk() under
rcu_read_lock(). At that point, the VMA may be concurrently freed, and it
can be recycled by another process. vma_start_read() then increments the
vma->vm_refcnt (if it is in an acceptable range), and if this succeeds,
vma_start_read() can return a recycled VMA.
In this scenario where the VMA has been recycled, lock_vma_under_rcu()
will then detect the mismatching ->vm_mm pointer and drop the VMA through
vma_end_read(), which calls vma_refcount_put(). vma_refcount_put() drops
the refcount and then calls rcuwait_wake_up() using a copy of vma->vm_mm.
This is wrong: It implicitly assumes that the caller is keeping the VMA's
mm alive, but in this scenario the caller has no relation to the VMA's mm,
so the rcuwait_wake_up() can cause UAF.
The diagram depicting the race:
T1                                           T2                            T3
==                                           ==                            ==
lock_vma_under_rcu
  mas_walk
                                             <VMA gets removed from mm>
                                                                           mmap
                                                                           <the same VMA is reallocated>
  vma_start_read
    __refcount_inc_not_zero_limited_acquire
                                                                           munmap
                                                                           __vma_enter_locked
                                                                             refcount_add_not_zero
  vma_end_read
    vma_refcount_put
      __refcount_dec_and_test
                                                                           rcuwait_wait_event
                                                                           <finish operation>
      rcuwait_wake_up [UAF]
Note that rcuwait_wait_event() in T3 does not block because refcount was
already dropped by T1. At this point T3 can exit and free the mm causing
UAF in T1.
To avoid this, move the vma->vm_mm verification into vma_start_read() and
grab vma->vm_mm to stabilize it before the vma_refcount_put() operation.
[surenb@google.com: v3] |
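A minimal userspace C sketch of the stabilization idea (simplified refcounts and hypothetical names; the real fix uses the kernel's mm grab/drop helpers): before dropping the vma refcount, an independent reference is taken on the mm that the subsequent wakeup touches, so that mm cannot be freed underneath the waker.
#include <stdio.h>
#include <stdlib.h>

struct mm { int refs; };
struct vma { int refs; struct mm *mm; };

static void mm_ref_put(struct mm *mm)
{
	if (--mm->refs == 0)
		free(mm);
}

static void vma_refcount_put(struct vma *vma)
{
	struct mm *mm = vma->mm;

	mm->refs++;			/* stabilize the mm first */
	if (--vma->refs == 0) {
		/* wake up waiters here: this touches mm, which is now
		 * guaranteed to stay alive until the put below */
	}
	mm_ref_put(mm);
}

int main(void)
{
	struct mm *mm = malloc(sizeof(*mm));

	if (!mm)
		return 1;
	mm->refs = 1;

	struct vma vma = { .refs = 1, .mm = mm };

	vma_refcount_put(&vma);		/* safe even if the vma was recycled */
	mm_ref_put(mm);			/* drop this sketch's own reference */
	printf("vma released with the mm pinned across the wakeup\n");
	return 0;
}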
| In the Linux kernel, the following vulnerability has been resolved:
ksmbd: validate command request size
Commit 2b9b8f3b68ed ("ksmbd: validate command payload size") checks the
request size only for the SMB2_OPLOCK_BREAK_HE command; the request sizes
of the other commands are not checked, which is not expected. Fix it by
adding request-size checks for the other commands. |
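A minimal userspace C sketch of the per-command check (command names and size values are purely illustrative, not ksmbd's actual tables): each command has a minimum fixed request size, and a request smaller than that is rejected before its body is parsed.
#include <stdio.h>
#include <stdint.h>

enum { CMD_NEGOTIATE, CMD_SESSION_SETUP, CMD_WRITE, CMD_MAX };

static const uint32_t min_req_size[CMD_MAX] = {
	[CMD_NEGOTIATE]     = 36,	/* illustrative values only */
	[CMD_SESSION_SETUP] = 25,
	[CMD_WRITE]         = 49,
};

static int check_request_size(int cmd, uint32_t payload_len)
{
	if (cmd < 0 || cmd >= CMD_MAX)
		return -1;
	return payload_len >= min_req_size[cmd] ? 0 : -1;
}

int main(void)
{
	printf("write, 16 bytes: %d\n", check_request_size(CMD_WRITE, 16));
	printf("write, 64 bytes: %d\n", check_request_size(CMD_WRITE, 64));
	return 0;
}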
| In the Linux kernel, the following vulnerability has been resolved:
ksmbd: fix wrong next length validation of ea buffer in smb2_set_ea()
There are multiple smb2_ea_info buffers in a FILE_FULL_EA_INFORMATION
request from the client. ksmbd finds the next smb2_ea_info using the
->NextEntryOffset of the current smb2_ea_info. ksmbd needs to validate the
buffer length before accessing the next EA. ksmbd should check the buffer
length using buf_len, not the next variable; next is the start offset of
the current EA, obtained from the previous EA. |
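A minimal userspace C sketch of the EA walk (struct layout simplified and hypothetical, not the on-wire ksmbd structures): the remaining buffer length, not the previous entry's offset, bounds every access to the next entry.
#include <stdio.h>
#include <stdint.h>
#include <string.h>

struct ea_info {
	uint32_t next_entry_offset;
	uint8_t  name_len;
	uint16_t value_len;
	/* name and value bytes follow */
} __attribute__((packed));

static int walk_ea_buffer(const uint8_t *buf, size_t buf_len)
{
	while (buf_len >= sizeof(struct ea_info)) {
		struct ea_info ea;

		memcpy(&ea, buf, sizeof(ea));

		size_t entry_len = sizeof(ea) + ea.name_len + 1 + ea.value_len;

		if (entry_len > buf_len)
			return -1;		/* entry overruns the buffer */
		/* ... apply the EA here ... */

		if (!ea.next_entry_offset)
			return 0;		/* last entry */
		if (ea.next_entry_offset > buf_len)
			return -1;		/* next offset overruns the buffer */

		buf += ea.next_entry_offset;
		buf_len -= ea.next_entry_offset;
	}
	return buf_len ? -1 : 0;
}

int main(void)
{
	uint8_t buf[64] = { 0 };	/* one zeroed entry: terminates cleanly */

	printf("walk: %d\n", walk_ea_buffer(buf, sizeof(buf)));
	return 0;
}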
| In the Linux kernel, the following vulnerability has been resolved:
ksmbd: fix out of bounds read in smb2_sess_setup
ksmbd does not consider the case where the SMB2 session setup request is
part of a compound request. If it is the second payload of the compound,
an out-of-bounds read occurs while processing the first payload in
smb2_sess_setup(). |
| In the Linux kernel, the following vulnerability has been resolved:
ksmbd: validate session id and tree id in the compound request
This patch validates the session id and tree id in a compound request.
If the first operation in the compound is an SMB2 ECHO request, ksmbd
bypasses session and tree validation, so work->sess and work->tcon can be
NULL. If the second request in the compound accesses work->sess or
work->tcon, it causes a NULL pointer dereference error. |
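A minimal userspace C sketch of the guard (hypothetical types): before a later operation in the compound touches the session or tree connection, it verifies that both were actually bound, since an SMB2 ECHO as the first operation leaves them NULL.
#include <stdio.h>
#include <stddef.h>

struct session { unsigned long long id; };
struct tree_conn { unsigned int id; };
struct work { struct session *sess; struct tree_conn *tcon; };

static int check_compound_state(const struct work *w)
{
	if (!w->sess || !w->tcon)
		return -1;	/* reject instead of dereferencing NULL */
	return 0;
}

int main(void)
{
	struct work w = { 0 };	/* ECHO came first: nothing bound */

	printf("second op allowed: %d\n", check_compound_state(&w));
	return 0;
}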
| In the Linux kernel, the following vulnerability has been resolved:
ksmbd: fix out-of-bound read in smb2_write
ksmbd_smb2_check_message doesn't validate hdr->NextCommand. If
->NextCommand is bigger than the Offset + Length of the SMB2 write, an
oversized SMB2 write length is allowed, which causes an OOB read in smb2_write. |
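A minimal userspace C sketch of the bounds idea (field names simplified and values illustrative, not the actual ksmbd check): the write payload extent (data offset + length) must fit within the current PDU, using NextCommand as the boundary when the request is part of a compound.
#include <stdio.h>
#include <stdint.h>

struct write_req {
	uint32_t next_command;	/* offset of the next PDU, 0 if last */
	uint16_t data_offset;	/* offset of the write payload in the PDU */
	uint32_t length;	/* claimed payload length */
};

static int check_write_bounds(const struct write_req *req, uint32_t pdu_len)
{
	uint64_t end = (uint64_t)req->data_offset + req->length;
	uint64_t limit = pdu_len;

	if (req->next_command && req->next_command < limit)
		limit = req->next_command;

	return end <= limit ? 0 : -1;	/* 0: payload stays inside the PDU */
}

int main(void)
{
	struct write_req ok  = { .next_command = 0,   .data_offset = 112, .length = 16 };
	struct write_req bad = { .next_command = 128, .data_offset = 112, .length = 4096 };

	printf("ok:  %d\n", check_write_bounds(&ok, 4096));
	printf("bad: %d\n", check_write_bounds(&bad, 8192));
	return 0;
}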
| In the Linux kernel, the following vulnerability has been resolved:
ksmbd: not allow guest user on multichannel
This patch returns STATUS_NOT_SUPPORTED if the binding session is a guest session. |
| In the Linux kernel, the following vulnerability has been resolved:
io_uring/rsrc: validate buffer count with offset for cloning
syzbot reports that it can trigger a WARN_ON() for a kmalloc() attempt
that's too big:
WARNING: CPU: 0 PID: 6488 at mm/slub.c:5024 __kvmalloc_node_noprof+0x520/0x640 mm/slub.c:5024
Modules linked in:
CPU: 0 UID: 0 PID: 6488 Comm: syz-executor312 Not tainted 6.15.0-rc7-syzkaller-gd7fa1af5b33e #0 PREEMPT
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/07/2025
pstate: 20400005 (nzCv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
pc : __kvmalloc_node_noprof+0x520/0x640 mm/slub.c:5024
lr : __do_kmalloc_node mm/slub.c:-1 [inline]
lr : __kvmalloc_node_noprof+0x3b4/0x640 mm/slub.c:5012
sp : ffff80009cfd7a90
x29: ffff80009cfd7ac0 x28: ffff0000dd52a120 x27: 0000000000412dc0
x26: 0000000000000178 x25: ffff7000139faf70 x24: 0000000000000000
x23: ffff800082f4cea8 x22: 00000000ffffffff x21: 000000010cd004a8
x20: ffff0000d75816c0 x19: ffff0000dd52a000 x18: 00000000ffffffff
x17: ffff800092f39000 x16: ffff80008adbe9e4 x15: 0000000000000005
x14: 1ffff000139faf1c x13: 0000000000000000 x12: 0000000000000000
x11: ffff7000139faf21 x10: 0000000000000003 x9 : ffff80008f27b938
x8 : 0000000000000002 x7 : 0000000000000000 x6 : 0000000000000000
x5 : 00000000ffffffff x4 : 0000000000400dc0 x3 : 0000000200000000
x2 : 000000010cd004a8 x1 : ffff80008b3ebc40 x0 : 0000000000000001
Call trace:
__kvmalloc_node_noprof+0x520/0x640 mm/slub.c:5024 (P)
kvmalloc_array_node_noprof include/linux/slab.h:1065 [inline]
io_rsrc_data_alloc io_uring/rsrc.c:206 [inline]
io_clone_buffers io_uring/rsrc.c:1178 [inline]
io_register_clone_buffers+0x484/0xa14 io_uring/rsrc.c:1287
__io_uring_register io_uring/register.c:815 [inline]
__do_sys_io_uring_register io_uring/register.c:926 [inline]
__se_sys_io_uring_register io_uring/register.c:903 [inline]
__arm64_sys_io_uring_register+0x42c/0xea8 io_uring/register.c:903
__invoke_syscall arch/arm64/kernel/syscall.c:35 [inline]
invoke_syscall+0x98/0x2b8 arch/arm64/kernel/syscall.c:49
el0_svc_common+0x130/0x23c arch/arm64/kernel/syscall.c:132
do_el0_svc+0x48/0x58 arch/arm64/kernel/syscall.c:151
el0_svc+0x58/0x17c arch/arm64/kernel/entry-common.c:767
el0t_64_sync_handler+0x78/0x108 arch/arm64/kernel/entry-common.c:786
el0t_64_sync+0x198/0x19c arch/arm64/kernel/entry.S:600
which is due to offset + buffer_count being too large. The registration
code checks only the total count of buffers, but since the buffers are
indexed into an array, it should also check offset + count. That sum can't
exceed IORING_MAX_REG_BUFFERS either, as there's no way to reach buffers
beyond that limit.
There's no issue with registering a table this large, outside of the fact
that it's pointless to register buffers that cannot be reached, and that
it can trigger this kmalloc() warning for attempting an allocation that is
too large. |
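A minimal userspace C sketch of the added validation (the constant name comes from the commit message; its value here is only for illustration): the destination offset plus the buffer count is checked against the table limit, with the sum computed in a wider type so it cannot wrap.
#include <stdio.h>
#include <stdint.h>

#define IORING_MAX_REG_BUFFERS (1U << 14)	/* illustrative value */

static int validate_clone_range(uint32_t dst_off, uint32_t nr_bufs)
{
	if (!nr_bufs || nr_bufs > IORING_MAX_REG_BUFFERS)
		return -1;
	if ((uint64_t)dst_off + nr_bufs > IORING_MAX_REG_BUFFERS)
		return -1;			/* would index past the table */
	return 0;
}

int main(void)
{
	printf("off 0,     1024 bufs: %d\n", validate_clone_range(0, 1024));
	printf("off 16000, 1024 bufs: %d\n", validate_clone_range(16000, 1024));
	return 0;
}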
| In the Linux kernel, the following vulnerability has been resolved:
wifi: ath12k: Fix memory leak due to multiple rx_stats allocation
rx_stats for each arsta is allocated when adding a station.
arsta->rx_stats will be freed when a station is removed.
Redundant allocations occur when the same station is added multiple
times: each add causes ath12k_mac_station_add() to be called, and rx_stats
is allocated each time, resulting in a memory leak.
Prevent multiple allocations of rx_stats when ath12k_mac_station_add() is
called repeatedly by checking whether rx_stats is already allocated before
allocating again, i.e. allocate arsta->rx_stats only if it is NULL.
Tested-on: QCN9274 hw2.0 PCI WLAN.WBE.1.3.1-00173-QCAHKSWPL_SILICONZ-1
Tested-on: WCN7850 hw2.0 PCI WLAN.HMT.1.0.c5-00481-QCAHMTSWPL_V1.0_V2.0_SILICONZ-3 |
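A minimal userspace C sketch of the guard (hypothetical types, not the ath12k code): rx_stats is allocated only when it is not already present, so repeated add calls for the same station cannot leak a previous allocation.
#include <stdio.h>
#include <stdlib.h>

struct rx_stats { unsigned long num_msdu; };
struct arsta { struct rx_stats *rx_stats; };

static int station_add(struct arsta *arsta)
{
	if (!arsta->rx_stats) {			/* the added guard */
		arsta->rx_stats = calloc(1, sizeof(*arsta->rx_stats));
		if (!arsta->rx_stats)
			return -12;		/* -ENOMEM */
	}
	return 0;
}

int main(void)
{
	struct arsta arsta = { 0 };

	station_add(&arsta);
	station_add(&arsta);			/* no second allocation, no leak */
	printf("rx_stats: %p\n", (void *)arsta.rx_stats);
	free(arsta.rx_stats);
	return 0;
}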
| In the Linux kernel, the following vulnerability has been resolved:
netfilter: nft_set_pipapo: clamp maximum map bucket size to INT_MAX
Otherwise, it is possible to hit the WARN_ON_ONCE in __kvmalloc_node_noprof()
when resizing the hashtable, because __GFP_NOWARN is unset.
Similar to:
b541ba7d1f5a ("netfilter: conntrack: clamp maximum hashtable size to INT_MAX") |
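A minimal userspace C sketch of the clamp (hypothetical helper name): the computed size is capped at INT_MAX before it reaches the allocator, the same approach as the conntrack commit referenced above.
#include <limits.h>
#include <stdint.h>
#include <stdio.h>

static size_t clamp_bucket_size(uint64_t requested)
{
	if (requested > INT_MAX)
		requested = INT_MAX;	/* never ask the allocator for more */
	return (size_t)requested;
}

int main(void)
{
	printf("clamped: %zu\n", clamp_bucket_size(1ULL << 40));
	printf("passed through: %zu\n", clamp_bucket_size(4096));
	return 0;
}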
| In the Linux kernel, the following vulnerability has been resolved:
drm/amd/display: Avoid divide by zero by initializing dummy pitch to 1
[Why]
If the dummy values in `populate_dummy_dml_surface_cfg()` aren't updated,
they can lead to a divide by zero in downstream callers like
CalculateVMAndRowBytes().
[How]
Initialize the dummy pitch to 1 to avoid the divide by zero. |
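A minimal userspace C sketch of the idea (hypothetical field names, not the DML structures): the dummy surface gets a pitch of 1 so downstream math that divides by the pitch cannot divide by zero.
#include <stdio.h>

struct dummy_surface { unsigned int pitch; unsigned int height; };

static void populate_dummy_surface(struct dummy_surface *s)
{
	s->pitch = 1;		/* never leave this zero */
	s->height = 1;
}

static unsigned int rows_for_bytes(const struct dummy_surface *s,
				   unsigned int bytes)
{
	return bytes / s->pitch;	/* safe: pitch >= 1 */
}

int main(void)
{
	struct dummy_surface s;

	populate_dummy_surface(&s);
	printf("rows: %u\n", rows_for_bytes(&s, 4096));
	return 0;
}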
| In the Linux kernel, the following vulnerability has been resolved:
mm: fix uprobe pte be overwritten when expanding vma
Patch series "Fix uprobe pte be overwritten when expanding vma".
This patch (of 4):
We encountered a BUG alert triggered by Syzkaller as follows:
BUG: Bad rss-counter state mm:00000000b4a60fca type:MM_ANONPAGES val:1
And we can reproduce it with the following steps:
1. register uprobe on file at zero offset
2. mmap the file at zero offset:
addr1 = mmap(NULL, 2 * 4096, PROT_NONE, MAP_PRIVATE, fd, 0);
3. mremap part of vma1 to new vma2:
addr2 = mremap(addr1, 4096, 2 * 4096, MREMAP_MAYMOVE);
4. mremap back to orig addr1:
mremap(addr2, 4096, 4096, MREMAP_MAYMOVE | MREMAP_FIXED, addr1);
In step 3, the vma1 range [addr1, addr1 + 4096] is remapped to the new vma2
with range [addr2, addr2 + 8192]; the uprobe anon page is remapped from vma1
to vma2, and then the vma1 range [addr1, addr1 + 4096] is unmapped.
In step 4, the vma2 range [addr2, addr2 + 4096] is remapped back to the
addr range [addr1, addr1 + 4096]. Since the addr range [addr1 + 4096,
addr1 + 8192] still maps the file, vma_merge_new_range is taken to
expand the range, and uprobe_mmap is then done in vma_complete. Since the
merged vma's pgoff is also zero, a uprobe anon page is installed in
the merged vma. However, the upcoming move_page_tables step, which uses
set_pte_at to remap the vma2 uprobe pte into the merged vma, overwrites
the newly installed uprobe pte in the merged vma, leaving that pte orphaned.
Since the uprobe pte will be remapped into the merged vma anyway, we can
remove the unnecessary uprobe_mmap on the merged vma.
This problem was first found in linux-6.6.y and also exists in the
community syzkaller:
https://lore.kernel.org/all/000000000000ada39605a5e71711@google.com/T/ |
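A minimal userspace C sketch of reproduction steps 2-4 above (step 1, registering a uprobe at file offset 0, is done separately through the kernel tracing interface; the file path is a placeholder):
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	int fd = open("/path/to/probed/file", O_RDONLY);	/* placeholder */

	if (fd < 0)
		return 1;

	/* step 2: map two pages of the file at offset 0 */
	void *addr1 = mmap(NULL, 2 * 4096, PROT_NONE, MAP_PRIVATE, fd, 0);

	if (addr1 == MAP_FAILED)
		return 1;

	/* step 3: move the first page away, growing it to two pages */
	void *addr2 = mremap(addr1, 4096, 2 * 4096, MREMAP_MAYMOVE);

	if (addr2 == MAP_FAILED)
		return 1;

	/* step 4: move one page back onto addr1; the merge re-installs the
	 * uprobe page and move_page_tables() then overwrites its pte */
	if (mremap(addr2, 4096, 4096, MREMAP_MAYMOVE | MREMAP_FIXED,
		   addr1) == MAP_FAILED)
		return 1;

	printf("sequence issued; affected kernels may log a bad rss-counter warning\n");
	close(fd);
	return 0;
}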
| A use-after-free vulnerability was discovered in Adobe Flash Player before 28.0.0.161. This vulnerability occurs due to a dangling pointer in the Primetime SDK related to media player handling of listener objects. A successful attack can lead to arbitrary code execution. This was exploited in the wild in January and February 2018. |
Adobe Flash Player versions 29.0.0.171 and earlier have a stack-based buffer overflow vulnerability. Successful exploitation could lead to arbitrary code execution in the context of the current user. |
| In the Linux kernel, the following vulnerability has been resolved:
nvme-tcp: remove tag set when second admin queue config fails
Commit 104d0e2f6222 ("nvme-fabrics: reset admin connection for secure
concatenation") modified nvme_tcp_setup_ctrl() to call
nvme_tcp_configure_admin_queue() twice. The first call prepares for
DH-CHAP negotiation, and the second call is required for secure
concatenation. However, this change triggered BUG KASAN slab-use-after-
free in blk_mq_queue_tag_busy_iter(). This BUG can be recreated by
repeating the blktests test case nvme/063 a few times [1].
When the BUG happens, nvme_tcp_create_ctrl() fails in the call chain
below:
nvme_tcp_create_ctrl()
  nvme_tcp_alloc_ctrl()              new=true  ... Alloc nvme_tcp_ctrl and admin_tag_set
  nvme_tcp_setup_ctrl()              new=true
    nvme_tcp_configure_admin_queue() new=true  ... Succeed
      nvme_alloc_admin_tag_set()               ... Alloc the tag set for admin_tag_set
    nvme_stop_keep_alive()
    nvme_tcp_teardown_admin_queue()  remove=false
    nvme_tcp_configure_admin_queue() new=false
      nvme_tcp_alloc_admin_queue()             ... Fail, but do not call nvme_remove_admin_tag_set()
  nvme_uninit_ctrl()
  nvme_put_ctrl()                              ... Free up the nvme_tcp_ctrl and admin_tag_set
The first call of nvme_tcp_configure_admin_queue() succeeds with
new=true argument. The second call fails with new=false argument. This
second call does not call nvme_remove_admin_tag_set() on failure, due to
the new=false argument. Then the admin tag set is not removed. However,
nvme_tcp_create_ctrl() assumes that nvme_tcp_setup_ctrl() would call
nvme_remove_admin_tag_set(). Then it frees up struct nvme_tcp_ctrl which
has admin_tag_set field. Later on, the timeout handler accesses the
admin_tag_set field and causes the BUG KASAN slab-use-after-free.
To avoid leaving the admin tag set behind, call nvme_remove_admin_tag_set()
when the second nvme_tcp_configure_admin_queue() call fails. Do not return
from nvme_tcp_setup_ctrl() on failure; instead, jump to the "destroy_admin"
goto label to call nvme_tcp_teardown_admin_queue(), which calls
nvme_remove_admin_tag_set(). |
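A minimal userspace C sketch of the error-path shape (hypothetical names and simplified logic, not the nvme-tcp code): when the second admin-queue configuration fails, control jumps to the same teardown label that removes the admin tag set instead of returning directly.
#include <stdio.h>
#include <stdlib.h>

struct ctrl {
	int *admin_tag_set;
};

static int configure_admin_queue(struct ctrl *c, int alloc_new, int fail)
{
	if (alloc_new)
		c->admin_tag_set = calloc(1, sizeof(*c->admin_tag_set));
	return fail ? -5 : 0;			/* -EIO on simulated failure */
}

static void teardown_admin_queue(struct ctrl *c, int remove_tag_set)
{
	if (remove_tag_set) {
		free(c->admin_tag_set);		/* removes the admin tag set */
		c->admin_tag_set = NULL;
	}
}

static int setup_ctrl(struct ctrl *c)
{
	int ret = configure_admin_queue(c, 1, 0);	/* first pass: succeeds */

	if (ret)
		goto destroy_admin;

	teardown_admin_queue(c, 0);			/* keep the tag set */

	ret = configure_admin_queue(c, 0, 1);		/* second pass: fails */
	if (ret)
		goto destroy_admin;			/* the fix: do not just return */
	return 0;

destroy_admin:
	teardown_admin_queue(c, 1);			/* frees the tag set */
	return ret;
}

int main(void)
{
	struct ctrl c = { 0 };

	printf("setup: %d, admin_tag_set: %p\n", setup_ctrl(&c),
	       (void *)c.admin_tag_set);
	return 0;
}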
| In the Linux kernel, the following vulnerability has been resolved:
can: kvaser_pciefd: refine error prone echo_skb_max handling logic
echo_skb_max should define the supported upper limit of echo_skb[]
allocated inside the netdevice's priv. The corresponding size value
provided by this driver to alloc_candev() is KVASER_PCIEFD_CAN_TX_MAX_COUNT
which is 17.
But later echo_skb_max is rounded up to the nearest power of two (for the
max case, that would be 32) and the tx/ack indices calculated further
during tx/rx may exceed the upper array boundary. KASAN reported this for
the ack case inside kvaser_pciefd_handle_ack_packet(), though the xmit
function has actually caught the same thing earlier.
BUG: KASAN: slab-out-of-bounds in kvaser_pciefd_handle_ack_packet+0x2d7/0x92a drivers/net/can/kvaser_pciefd.c:1528
Read of size 8 at addr ffff888105e4f078 by task swapper/4/0
CPU: 4 UID: 0 PID: 0 Comm: swapper/4 Not tainted 6.15.0 #12 PREEMPT(voluntary)
Call Trace:
<IRQ>
dump_stack_lvl lib/dump_stack.c:122
print_report mm/kasan/report.c:521
kasan_report mm/kasan/report.c:634
kvaser_pciefd_handle_ack_packet drivers/net/can/kvaser_pciefd.c:1528
kvaser_pciefd_read_packet drivers/net/can/kvaser_pciefd.c:1605
kvaser_pciefd_read_buffer drivers/net/can/kvaser_pciefd.c:1656
kvaser_pciefd_receive_irq drivers/net/can/kvaser_pciefd.c:1684
kvaser_pciefd_irq_handler drivers/net/can/kvaser_pciefd.c:1733
__handle_irq_event_percpu kernel/irq/handle.c:158
handle_irq_event kernel/irq/handle.c:210
handle_edge_irq kernel/irq/chip.c:833
__common_interrupt arch/x86/kernel/irq.c:296
common_interrupt arch/x86/kernel/irq.c:286
</IRQ>
Tx max count definitely matters for kvaser_pciefd_tx_avail(), but for seq
number generation that's not the case: we're free to calculate seq numbers
in whatever way is more convenient, without taking the tx max count into
account. The only downside is that the size of echo_skb[] then has to
correspond to the max seq number (not the tx max count), so in some
situations a bit more memory is consumed than strictly necessary.
Thus make the size of the underlying echo_skb[] sufficient for the rounded
max tx value.
Found by Linux Verification Center (linuxtesting.org) with Syzkaller. |
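A minimal userspace C sketch of the sizing rule (the roundup helper is written out here; the kernel has its own roundup_pow_of_two()): if sequence numbers wrap at the next power of two above the tx max count, echo_skb[] must be sized to that rounded value, not to the raw count.
#include <stdio.h>

#define CAN_TX_MAX_COUNT 17U		/* mirrors the value quoted above */

static unsigned int roundup_pow_of_two_u32(unsigned int n)
{
	unsigned int p = 1;

	while (p < n)
		p <<= 1;
	return p;
}

int main(void)
{
	unsigned int seq_space = roundup_pow_of_two_u32(CAN_TX_MAX_COUNT);

	/* echo_skb[] needs seq_space (32) slots, not CAN_TX_MAX_COUNT (17) */
	printf("tx max %u -> echo_skb size %u\n", CAN_TX_MAX_COUNT, seq_space);
	return 0;
}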