In the Linux kernel, the following vulnerability has been resolved:
fs/ntfs3: Validate ff offset
This adds sanity checks for the ff offset. There is an initial check
on rt->first_free, but the walk through the table by ff offsets happens
without any further validation. If a subsequent ff value is a large
offset, we may encounter an out-of-bounds read.
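A minimal sketch of the hardened walk, assuming ntfs3-style helpers
(Add2Ptr()) and a restart table whose free list is chained by byte
offsets; the function and field names are illustrative, not the exact
patch:

static bool rt_free_list_valid(const struct RESTART_TABLE *rt, u32 bytes)
{
    u32 off = le32_to_cpu(rt->first_free);
    u16 entries = le16_to_cpu(rt->total); /* bounds the walk */

    while (off && entries--) {
        /* Reject any ff offset that would be read out of bounds. */
        if (off + sizeof(__le32) > bytes)
            return false;
        off = le32_to_cpu(*(__le32 *)Add2Ptr(rt, off));
    }
    return true;
}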

In the Linux kernel, the following vulnerability has been resolved:
filelock: Remove locks reliably when fcntl/close race is detected
When fcntl_setlk() races with close(), it removes the created lock with
do_lock_file_wait().
However, LSMs can allow the first do_lock_file_wait() that created the lock
while denying the second do_lock_file_wait() that tries to remove the lock.
Separately, posix_lock_file() could also fail to
remove a lock due to GFP_KERNEL allocation failure (when splitting a range
in the middle).
After the bug has been triggered, use-after-free reads will occur in
lock_get_status() when userspace reads /proc/locks. This can likely be used
to read arbitrary kernel memory, but can't corrupt kernel memory.
Fix it by calling locks_remove_posix() instead, which is designed to
reliably get rid of POSIX locks associated with the given file and
files_struct and is also used by filp_flush().
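A sketch of the fixed cleanup path in fcntl_setlk(), reconstructed from
the description above (surrounding error handling abbreviated; treat
the details as an approximation):

    /* Detect a close/fcntl race after the lock was applied; not
     * needed when unlocking or for OFD locks.
     */
    if (!error && file_lock->fl_type != F_UNLCK &&
        !(file_lock->fl_flags & FL_OFDLCK)) {
        struct files_struct *files = current->files;
        struct file *f;

        spin_lock(&files->file_lock);
        f = files_lookup_fd_locked(files, fd);
        spin_unlock(&files->file_lock);
        if (f != filp) {
            /* Previously: a second do_lock_file_wait() with F_UNLCK,
             * which an LSM denial or allocation failure could make
             * fail, leaving a stale lock behind. locks_remove_posix()
             * strips all POSIX locks owned by this files_struct
             * unconditionally.
             */
            locks_remove_posix(filp, files);
            error = -EBADF;
        }
    }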

In the Linux kernel, the following vulnerability has been resolved:
drm/amdkfd: don't allow mapping the MMIO HDP page with large pages
We don't get the right offset in that case. The GPU has
an unused 4K area of the register BAR space into which you can
remap registers. We remap the HDP flush registers into this
space to allow userspace (CPU or GPU) to flush the HDP when it
updates VRAM. However, on systems with >4K pages, we end up
exposing PAGE_SIZE of MMIO space.
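The description implies a straightforward guard; a sketch of the idea
(the placement in the mmap handler is an assumption):

    /* in the mmap handler for the HDP MMIO page (sketch) */
    if (PAGE_SIZE != SZ_4K) {
        /* The remapped HDP flush registers occupy a single 4K region
         * of the register BAR. With larger kernel pages, mapping it
         * would expose PAGE_SIZE of surrounding MMIO space to
         * userspace, so refuse the mapping.
         */
        return -EINVAL;
    }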

In the Linux kernel, the following vulnerability has been resolved:
bpf: Fix overrunning reservations in ringbuf
The BPF ring buffer internally is implemented as a power-of-2 sized circular
buffer, with two logical and ever-increasing counters: consumer_pos is the
consumer counter to show which logical position the consumer consumed the
data, and producer_pos which is the producer counter denoting the amount of
data reserved by all producers.
Each time a record is reserved, the producer that "owns" the record
advances the producer counter. In user space, each time a record is
read, the consumer advances the consumer counter once it has finished
processing. Both counters are stored in separate pages so that, from
user space, the producer counter is read-only and the consumer counter
is read-write.
One aspect that simplifies, and thus speeds up, the implementation of
both producers and consumers is that the data area is mapped twice,
contiguously back-to-back, in virtual memory. This avoids any special
handling for samples that wrap around at the end of the circular buffer
data area: the page after the last data page is the first data page
again, so the sample still appears completely contiguous in virtual
memory.
Each record has a struct bpf_ringbuf_hdr { u32 len; u32 pg_off; } header for
book-keeping the length and offset, and is inaccessible to the BPF program.
Helpers like bpf_ringbuf_reserve() return `(void *)hdr + BPF_RINGBUF_HDR_SZ`
for the BPF program to use. Bing-Jhong and Muhammad reported that it
is, however, possible to make a second allocated memory chunk overlap
with the first chunk and, as a result, the BPF program is then able to
edit the first chunk's header.
For example, consider the creation of a BPF_MAP_TYPE_RINGBUF map with size
of 0x4000. Next, the consumer_pos is modified to 0x3000 /before/ a call to
bpf_ringbuf_reserve() is made. This will allocate a chunk A, which is in
[0x0,0x3008], and the BPF program is able to edit [0x8,0x3008]. Now, let's
allocate a chunk B with size 0x3000. This will succeed because consumer_pos
was edited ahead of time to pass the `new_prod_pos - cons_pos > rb->mask`
check. Chunk B will be in range [0x3008,0x6010], and the BPF program is able
to edit [0x3010,0x6010]. Due to the ring buffer memory layout mentioned
earlier, the ranges [0x0,0x4000] and [0x4000,0x8000] point to the same data
pages. This means that chunk B at [0x4000,0x4008] is chunk A's header.
bpf_ringbuf_submit() / bpf_ringbuf_discard() use the header's pg_off to then
locate the bpf_ringbuf itself via bpf_ringbuf_restore_from_rec(). Once chunk
B modified chunk A's header, then bpf_ringbuf_commit() refers to the wrong
page and could cause a crash.
Fix it by calculating the oldest pending_pos and checking whether the
range from the oldest outstanding record to the newest would span beyond
the ring buffer size. If that is the case, reject the request. We've
tested with the ring buffer benchmark in BPF selftests
(./benchs/run_bench_ringbufs.sh) before and after the fix; while it
seems a bit slower on some benchmarks, the difference is not significant
enough to matter.
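A condensed sketch of the fix in __bpf_ringbuf_reserve(), based on the
description above (locking, NMI handling and header setup elided):

    unsigned long cons_pos, prod_pos, new_prod_pos, pend_pos;
    struct bpf_ringbuf_hdr *hdr;
    u32 hdr_len, tmp_size;

    cons_pos = smp_load_acquire(&rb->consumer_pos);
    pend_pos = rb->pending_pos; /* oldest not-yet-committed record */
    prod_pos = rb->producer_pos;
    new_prod_pos = prod_pos + len;

    /* Advance pending_pos past records that were already committed. */
    while (pend_pos < prod_pos) {
        hdr = (void *)rb->data + (pend_pos & rb->mask);
        hdr_len = READ_ONCE(hdr->len);
        if (hdr_len & BPF_RINGBUF_BUSY_BIT)
            break; /* oldest record is still pending */
        tmp_size = hdr_len & ~BPF_RINGBUF_DISCARD_BIT;
        pend_pos += round_up(tmp_size + BPF_RINGBUF_HDR_SZ, 8);
    }
    rb->pending_pos = pend_pos;

    /* Reject if the new record would overrun either the consumer
     * position (as before) or the oldest pending record: otherwise a
     * fresh chunk could alias an uncommitted chunk's header, as in
     * the 0x3000/0x3008 example above.
     */
    if (new_prod_pos - cons_pos > rb->mask ||
        new_prod_pos - pend_pos > rb->mask)
        return NULL;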

In the Linux kernel, the following vulnerability has been resolved:
tcp: avoid too many retransmit packets
If a TCP socket is using TCP_USER_TIMEOUT, and the other peer
retracted its window to zero, tcp_retransmit_timer() can
retransmit a packet every two jiffies (2 ms for HZ=1000),
for about 4 minutes after TCP_USER_TIMEOUT has 'expired'.
The fix is to make sure tcp_rtx_probe0_timed_out() takes
icsk->icsk_user_timeout into account.
Before the blamed commit, the socket would not time out after
icsk->icsk_user_timeout, but would use standard exponential
backoff for the retransmits.
Also worth noting that before commit e89688e3e978 ("net: tcp:
fix unexcepted socket die when snd_wnd is 0"), the issue
would last 2 minutes instead of 4.
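A sketch of the repaired helper, with TCP_USER_TIMEOUT (in
milliseconds) bounding the zero-window probe retransmits; details may
differ slightly from the final patch:

static bool tcp_rtx_probe0_timed_out(const struct sock *sk,
                                     const struct sk_buff *skb,
                                     u32 rtx_delta) /* ms since first rtx */
{
    const struct inet_connection_sock *icsk = inet_csk(sk);
    u32 user_timeout = READ_ONCE(icsk->icsk_user_timeout); /* ms */
    const struct tcp_sock *tp = tcp_sk(sk);
    int timeout = TCP_RTO_MAX * 2; /* jiffies */
    s32 rcv_delta;

    if (user_timeout) {
        /* A zero-window probe must not keep the socket alive (and
         * retransmitting every 2 jiffies) past TCP_USER_TIMEOUT.
         */
        if (rtx_delta > user_timeout)
            return true;
        timeout = min_t(u32, timeout, msecs_to_jiffies(user_timeout));
    }

    rcv_delta = icsk->icsk_timeout - tp->rcv_tstamp;
    if (rcv_delta <= timeout)
        return false;

    return msecs_to_jiffies(rtx_delta) > timeout;
}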

In the Linux kernel, the following vulnerability has been resolved:
netrom: Fix a memory leak in nr_heartbeat_expiry()
syzbot reported a memory leak in nr_create() [0].
Commit 409db27e3a2e ("netrom: Fix use-after-free of a listening socket.")
added sock_hold() to the nr_heartbeat_expiry() function, where
a) a socket has a SOCK_DESTROY flag or
b) a listening socket has a SOCK_DEAD flag.
But in case "a," when the SOCK_DESTROY flag is set, the file descriptor
has already been closed and nr_release() has already been called, so it
makes no sense to hold the reference count: no one will call
nr_destroy_socket() again and put the reference, as happens in case "b."
nr_connect
  nr_establish_data_link
    nr_start_heartbeat

nr_release
  switch (nr->state)
  case NR_STATE_3
    nr->state = NR_STATE_2
    sock_set_flag(sk, SOCK_DESTROY);

nr_rx_frame
  nr_process_rx_frame
    switch (nr->state)
    case NR_STATE_2
      nr_state2_machine()
        nr_disconnect()
          nr_sk(sk)->state = NR_STATE_0
          sock_set_flag(sk, SOCK_DEAD)

nr_heartbeat_expiry
  switch (nr->state)
  case NR_STATE_0
    if (sock_flag(sk, SOCK_DESTROY) ||
        (sk->sk_state == TCP_LISTEN &&
         sock_flag(sk, SOCK_DEAD)))
      sock_hold()    // ( !!! )
      nr_destroy_socket()
To fix the memory leak, let's call sock_hold() only for a listening socket.
Found by InfoTeCS on behalf of Linux Verification Center
(linuxtesting.org) with Syzkaller.
[0]: https://syzkaller.appspot.com/bug?extid=d327a1f3b12e1e206c16
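A sketch of the corrected branch in nr_heartbeat_expiry(), reconstructed
from the description (the exact hunk may differ):

    case NR_STATE_0:
        if (sock_flag(sk, SOCK_DESTROY) ||
            (sk->sk_state == TCP_LISTEN &&
             sock_flag(sk, SOCK_DEAD))) {
            /* In the SOCK_DESTROY case nr_release() has already run
             * and nobody will put this reference back, so take the
             * extra hold only for the dead listening socket.
             */
            if (sk->sk_state == TCP_LISTEN &&
                sock_flag(sk, SOCK_DEAD))
                sock_hold(sk);
            bh_unlock_sock(sk);
            nr_destroy_socket(sk);
            goto out;
        }
        break;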

In the Linux kernel, the following vulnerability has been resolved:
tracing: Build event generation tests only as modules
The kprobe and synth event generation test modules add events and lock
them (take a reference on the event files) in their module init
functions, and unlock and delete them in their module exit functions,
because they are designed to be used as modules.
If these tests are built in, those events are left locked in the kernel
and can never be removed. This causes the kprobe event self-test failure
shown below.
[ 97.349708] ------------[ cut here ]------------
[ 97.353453] WARNING: CPU: 3 PID: 1 at kernel/trace/trace_kprobe.c:2133 kprobe_trace_self_tests_init+0x3f1/0x480
[ 97.357106] Modules linked in:
[ 97.358488] CPU: 3 PID: 1 Comm: swapper/0 Not tainted 6.9.0-g699646734ab5-dirty #14
[ 97.361556] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.15.0-1 04/01/2014
[ 97.363880] RIP: 0010:kprobe_trace_self_tests_init+0x3f1/0x480
[ 97.365538] Code: a8 24 08 82 e9 ae fd ff ff 90 0f 0b 90 48 c7 c7 e5 aa 0b 82 e9 ee fc ff ff 90 0f 0b 90 48 c7 c7 2d 61 06 82 e9 8e fd ff ff 90 <0f> 0b 90 48 c7 c7 33 0b 0c 82 89 c6 e8 6e 03 1f ff 41 ff c7 e9 90
[ 97.370429] RSP: 0000:ffffc90000013b50 EFLAGS: 00010286
[ 97.371852] RAX: 00000000fffffff0 RBX: ffff888005919c00 RCX: 0000000000000000
[ 97.373829] RDX: ffff888003f40000 RSI: ffffffff8236a598 RDI: ffff888003f40a68
[ 97.375715] RBP: 0000000000000000 R08: 0000000000000001 R09: 0000000000000000
[ 97.377675] R10: ffffffff811c9ae5 R11: ffffffff8120c4e0 R12: 0000000000000000
[ 97.379591] R13: 0000000000000001 R14: 0000000000000015 R15: 0000000000000000
[ 97.381536] FS: 0000000000000000(0000) GS:ffff88807dcc0000(0000) knlGS:0000000000000000
[ 97.383813] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 97.385449] CR2: 0000000000000000 CR3: 0000000002244000 CR4: 00000000000006b0
[ 97.387347] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 97.389277] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 97.391196] Call Trace:
[ 97.391967] <TASK>
[ 97.392647] ? __warn+0xcc/0x180
[ 97.393640] ? kprobe_trace_self_tests_init+0x3f1/0x480
[ 97.395181] ? report_bug+0xbd/0x150
[ 97.396234] ? handle_bug+0x3e/0x60
[ 97.397311] ? exc_invalid_op+0x1a/0x50
[ 97.398434] ? asm_exc_invalid_op+0x1a/0x20
[ 97.399652] ? trace_kprobe_is_busy+0x20/0x20
[ 97.400904] ? tracing_reset_all_online_cpus+0x15/0x90
[ 97.402304] ? kprobe_trace_self_tests_init+0x3f1/0x480
[ 97.403773] ? init_kprobe_trace+0x50/0x50
[ 97.404972] do_one_initcall+0x112/0x240
[ 97.406113] do_initcall_level+0x95/0xb0
[ 97.407286] ? kernel_init+0x1a/0x1a0
[ 97.408401] do_initcalls+0x3f/0x70
[ 97.409452] kernel_init_freeable+0x16f/0x1e0
[ 97.410662] ? rest_init+0x1f0/0x1f0
[ 97.411738] kernel_init+0x1a/0x1a0
[ 97.412788] ret_from_fork+0x39/0x50
[ 97.413817] ? rest_init+0x1f0/0x1f0
[ 97.414844] ret_from_fork_asm+0x11/0x20
[ 97.416285] </TASK>
[ 97.417134] irq event stamp: 13437323
[ 97.418376] hardirqs last enabled at (13437337): [<ffffffff8110bc0c>] console_unlock+0x11c/0x150
[ 97.421285] hardirqs last disabled at (13437370): [<ffffffff8110bbf1>] console_unlock+0x101/0x150
[ 97.423838] softirqs last enabled at (13437366): [<ffffffff8108e17f>] handle_softirqs+0x23f/0x2a0
[ 97.426450] softirqs last disabled at (13437393): [<ffffffff8108e346>] __irq_exit_rcu+0x66/0xd0
[ 97.428850] ---[ end trace 0000000000000000 ]---
Also, since the dynamic_event file cannot be cleaned up, ftracetest
fails too.
To avoid these issues, build these tests only as modules.

In the Linux kernel, the following vulnerability has been resolved:
crypto: hisilicon/sec - Fix memory leak for sec resource release
The AIV is one of the SEC resources. When releasing resources, the AIV
resources need to be released at the same time; otherwise, memory is
leaked. Add the AIV resource release to the SEC resource release
function.

In the Linux kernel, the following vulnerability has been resolved:
io_uring/sqpoll: work around a potential audit memory leak
kmemleak complains that there's a memory leak related to connect
handling:
unreferenced object 0xffff0001093bdf00 (size 128):
comm "iou-sqp-455", pid 457, jiffies 4294894164
hex dump (first 32 bytes):
02 00 fa ea 7f 00 00 01 00 00 00 00 00 00 00 00 ................
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
backtrace (crc 2e481b1a):
[<00000000c0a26af4>] kmemleak_alloc+0x30/0x38
[<000000009c30bb45>] kmalloc_trace+0x228/0x358
[<000000009da9d39f>] __audit_sockaddr+0xd0/0x138
[<0000000089a93e34>] move_addr_to_kernel+0x1a0/0x1f8
[<000000000b4e80e6>] io_connect_prep+0x1ec/0x2d4
[<00000000abfbcd99>] io_submit_sqes+0x588/0x1e48
[<00000000e7c25e07>] io_sq_thread+0x8a4/0x10e4
[<00000000d999b491>] ret_from_fork+0x10/0x20
which can happen if:
1) The command type does something on the prep side that triggers an
audit call.
2) The thread hasn't done any operations before this that triggered
an audit call inside ->issue(), where we have audit_uring_entry()
and audit_uring_exit().
Work around this by issuing a blanket NOP operation before the SQPOLL
does anything.
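The blanket NOP reduces to running one audit entry/exit cycle up front
in io_sq_thread(); a sketch of the workaround, per the description:

    /* Run one audit cycle for a NOP before submitting anything, so
     * that an audit allocation made from a prep handler (e.g. the
     * sockaddr copy in connect prep) is later released by the normal
     * audit_uring_exit() path instead of being leaked.
     */
    audit_uring_entry(IORING_OP_NOP);
    audit_uring_exit(true, 0);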

In the Linux kernel, the following vulnerability has been resolved:
bpf: Avoid splat in pskb_pull_reason
syzkaller builds (CONFIG_DEBUG_NET=y) frequently trigger a debug
hint in pskb_may_pull.
We'd like to retain this debug check because it might hint at integer
overflows and other issues (kernel code should pull headers, not huge
values).
In bpf case, this splat isn't interesting at all: such (nonsensical)
bpf programs are typically generated by a fuzzer anyway.
Do what Eric suggested and suppress such warning.
For CONFIG_DEBUG_NET=n we don't need the extra check because
pskb_may_pull will do the right thing: return an error without the
WARN() backtrace.
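One way to realize this, assuming the check lands in the bpf helper
path that feeds pskb_may_pull(); the exact hunk in the fix may differ:

static int bpf_try_make_writable(struct sk_buff *skb,
                                 unsigned int write_len)
{
    int err;

    /* Fuzzer-generated programs can ask to pull absurd lengths; fail
     * them here so the CONFIG_DEBUG_NET warning in the pull path
     * never fires for the bpf case.
     */
    if (unlikely(write_len > skb->len))
        return -EINVAL;

    err = skb_ensure_writable(skb, write_len);
    bpf_compute_data_pointers(skb);
    return err;
}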

In the Linux kernel, the following vulnerability has been resolved:
net/sched: act_api: fix possible infinite loop in tcf_idr_check_alloc()
syzbot found hanging tasks waiting on rtnl_lock [1]
A reproducer is available in the syzbot bug.
When a request to add multiple actions with the same index is sent, the
second request blocks forever waiting on the first while holding
rtnl_lock, which causes other tasks to hang.
Return -EAGAIN to prevent infinite looping, while keeping documented
behavior.
[1]
INFO: task kworker/1:0:5088 blocked for more than 143 seconds.
Not tainted 6.9.0-rc4-syzkaller-00173-g3cdb45594619 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/1:0 state:D stack:23744 pid:5088 tgid:5088 ppid:2 flags:0x00004000
Workqueue: events_power_efficient reg_check_chans_work
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5409 [inline]
__schedule+0xf15/0x5d00 kernel/sched/core.c:6746
__schedule_loop kernel/sched/core.c:6823 [inline]
schedule+0xe7/0x350 kernel/sched/core.c:6838
schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6895
__mutex_lock_common kernel/locking/mutex.c:684 [inline]
__mutex_lock+0x5b8/0x9c0 kernel/locking/mutex.c:752
wiphy_lock include/net/cfg80211.h:5953 [inline]
reg_leave_invalid_chans net/wireless/reg.c:2466 [inline]
reg_check_chans_work+0x10a/0x10e0 net/wireless/reg.c:2481
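A simplified sketch of the change in tcf_idr_check_alloc(): instead of
retrying when another writer has reserved the index but not yet
committed the action pointer, bail out with -EAGAIN and let the caller
restart:

    p = idr_find(&idrinfo->action_idr, *index);
    if (IS_ERR(p)) {
        /* Another request reserved this index but has not assigned
         * the action pointer yet. Spinning here would hang, since we
         * wait while holding rtnl_lock; return -EAGAIN so the
         * operation is retried from the top, preserving the
         * documented behavior.
         */
        mutex_unlock(&idrinfo->lock);
        return -EAGAIN;
    }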

In the Linux kernel, the following vulnerability has been resolved:
ptp: fix integer overflow in max_vclocks_store
On 32bit systems, the "4 * max" multiply can overflow. Use kcalloc()
to do the allocation to prevent this.
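The fix is essentially one allocation call; roughly (context from
max_vclocks_store()):

    /* Before: on 32-bit, sizeof(int) * max can wrap, so a short
     * buffer could be returned and later overrun.
     */
    vclock_index = kzalloc(sizeof(int) * max, GFP_KERNEL);

    /* After: kcalloc() checks the multiplication for overflow and
     * returns NULL instead of allocating a truncated buffer.
     */
    vclock_index = kcalloc(max, sizeof(int), GFP_KERNEL);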

In the Linux kernel, the following vulnerability has been resolved:
netfilter: ipset: Fix suspicious rcu_dereference_protected()
When destroying all sets, we are either in pernet exit phase or
are executing a "destroy all sets command" from userspace. The latter
was taken into account in ip_set_dereference() (nfnetlink mutex is held),
but the former was not. The patch adds the required check to
rcu_dereference_protected() in ip_set_dereference().
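A sketch of the widened lockdep condition; the per-instance flag name
below is an assumption based on the description:

/* ip_set_dereference() must also be legal during pernet exit, when
 * neither the nfnetlink mutex nor ip_set_ref_lock is held but the
 * instance can no longer be reached concurrently.
 */
#define ip_set_dereference(inst)                          \
    rcu_dereference_protected((inst)->ip_set_list,        \
        lockdep_is_held(&ip_set_ref_lock) ||              \
        lockdep_nfnl_is_held(NFNL_SUBSYS_IPSET) ||        \
        (inst)->is_destroyed) /* pernet exit path */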

In the Linux kernel, the following vulnerability has been resolved:
RDMA/mlx5: Add check for srq max_sge attribute
The max_sge attribute is passed in by the user and was inserted and used
unchecked, so verify that the value does not exceed the maximum allowed
value before using it.
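The fix reduces to an early validation in the SRQ create path; a sketch
with an assumed capability accessor:

    u32 max_srq_sges = mlx5_max_srq_sges(dev); /* assumed helper */

    if (init_attr->attr.max_sge > max_srq_sges) {
        mlx5_ib_dbg(dev, "SRQ max_sge %d exceeds device limit %u\n",
                    init_attr->attr.max_sge, max_srq_sges);
        return -EINVAL;
    }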

In the Linux kernel, the following vulnerability has been resolved:
KVM: arm64: Disassociate vcpus from redistributor region on teardown
When tearing down a redistributor region, make sure we don't have
any dangling pointer to that region stored in a vcpu.
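A sketch of the teardown fix, per the description (names follow the
vgic code, but the exact shape is an assumption):

static void vgic_v3_free_redist_region(struct kvm *kvm,
                                       struct vgic_redist_region *rdreg)
{
    struct kvm_vcpu *vcpu;
    unsigned long c;

    /* Drop any dangling vcpu reference to the region being freed. */
    kvm_for_each_vcpu(c, vcpu, kvm) {
        if (vcpu->arch.vgic_cpu.rdreg == rdreg)
            vcpu->arch.vgic_cpu.rdreg = NULL;
    }

    list_del(&rdreg->list);
    kfree(rdreg);
}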

In the Linux kernel, the following vulnerability has been resolved:
ACPICA: Revert "ACPICA: avoid Info: mapping multiple BARs. Your kernel is fine."
Undo the modifications made in commit d410ee5109a1 ("ACPICA: avoid
"Info: mapping multiple BARs. Your kernel is fine.""). The initial
purpose of this commit was to stop memory mappings for operation
regions from overlapping page boundaries, as it can trigger warnings
if different page attributes are present.
However, it was found that when this situation arises, mapping
continues until the end of the boundary, but an attempt is still made
to read/write the entire length of the map, leading to a NULL pointer
dereference. For example, if a four-byte mapping request is made but
only one byte is mapped because the request hits the end of the current
page boundary, a four-byte read/write attempt is still made, resulting
in a NULL pointer dereference.
Instead, map the entire length, as the ACPI specification does not
mandate that it must be within the same page boundary. It is
permissible for it to be mapped across different regions.

In the Linux kernel, the following vulnerability has been resolved:
tipc: force a dst refcount before doing decryption
As it says in commit 3bc07321ccc2 ("xfrm: Force a dst refcount before
entering the xfrm type handlers"):
"Crypto requests might return asynchronous. In this case we leave the
rcu protected region, so force a refcount on the skb's destination
entry before we enter the xfrm type input/output handlers."
On TIPC decryption path it has the same problem, and skb_dst_force()
should be called before doing decryption to avoid a possible crash.
Shuang reported this issue when the following warning was triggered:
[] WARNING: include/net/dst.h:337 tipc_sk_rcv+0x1055/0x1ea0 [tipc]
[] Kdump: loaded Tainted: G W --------- - - 4.18.0-496.el8.x86_64+debug
[] Workqueue: crypto cryptd_queue_worker
[] RIP: 0010:tipc_sk_rcv+0x1055/0x1ea0 [tipc]
[] Call Trace:
[] tipc_sk_mcast_rcv+0x548/0xea0 [tipc]
[] tipc_rcv+0xcf5/0x1060 [tipc]
[] tipc_aead_decrypt_done+0x215/0x2e0 [tipc]
[] cryptd_aead_crypt+0xdb/0x190
[] cryptd_queue_worker+0xed/0x190
[] process_one_work+0x93d/0x17e0
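The fix itself is essentially a one-liner on the receive path; a sketch
(call site simplified, arguments abbreviated):

#ifdef CONFIG_TIPC_CRYPTO
    /* Decryption may complete asynchronously outside the RCU-protected
     * region, so pin the skb's dst entry first, as the xfrm handlers
     * do (see commit 3bc07321ccc2).
     */
    skb_dst_force(skb);
    tipc_crypto_rcv(net, NULL, &skb, b);
#endif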

In the Linux kernel, the following vulnerability has been resolved:
drop_monitor: replace spin_lock by raw_spin_lock
trace_drop_common() is called with preemption disabled, and it acquires
a spin_lock. This is problematic for RT kernels because spin_locks are
sleeping locks in this configuration, which causes the following splat:
BUG: sleeping function called from invalid context at kernel/locking/spinlock_rt.c:48
in_atomic(): 1, irqs_disabled(): 1, non_block: 0, pid: 449, name: rcuc/47
preempt_count: 1, expected: 0
RCU nest depth: 2, expected: 2
5 locks held by rcuc/47/449:
#0: ff1100086ec30a60 ((softirq_ctrl.lock)){+.+.}-{2:2}, at: __local_bh_disable_ip+0x105/0x210
#1: ffffffffb394a280 (rcu_read_lock){....}-{1:2}, at: rt_spin_lock+0xbf/0x130
#2: ffffffffb394a280 (rcu_read_lock){....}-{1:2}, at: __local_bh_disable_ip+0x11c/0x210
#3: ffffffffb394a160 (rcu_callback){....}-{0:0}, at: rcu_do_batch+0x360/0xc70
#4: ff1100086ee07520 (&data->lock){+.+.}-{2:2}, at: trace_drop_common.constprop.0+0xb5/0x290
irq event stamp: 139909
hardirqs last enabled at (139908): [<ffffffffb1df2b33>] _raw_spin_unlock_irqrestore+0x63/0x80
hardirqs last disabled at (139909): [<ffffffffb19bd03d>] trace_drop_common.constprop.0+0x26d/0x290
softirqs last enabled at (139892): [<ffffffffb07a1083>] __local_bh_enable_ip+0x103/0x170
softirqs last disabled at (139898): [<ffffffffb0909b33>] rcu_cpu_kthread+0x93/0x1f0
Preemption disabled at:
[<ffffffffb1de786b>] rt_mutex_slowunlock+0xab/0x2e0
CPU: 47 PID: 449 Comm: rcuc/47 Not tainted 6.9.0-rc2-rt1+ #7
Hardware name: Dell Inc. PowerEdge R650/0Y2G81, BIOS 1.6.5 04/15/2022
Call Trace:
<TASK>
dump_stack_lvl+0x8c/0xd0
dump_stack+0x14/0x20
__might_resched+0x21e/0x2f0
rt_spin_lock+0x5e/0x130
? trace_drop_common.constprop.0+0xb5/0x290
? skb_queue_purge_reason.part.0+0x1bf/0x230
trace_drop_common.constprop.0+0xb5/0x290
? preempt_count_sub+0x1c/0xd0
? _raw_spin_unlock_irqrestore+0x4a/0x80
? __pfx_trace_drop_common.constprop.0+0x10/0x10
? rt_mutex_slowunlock+0x26a/0x2e0
? skb_queue_purge_reason.part.0+0x1bf/0x230
? __pfx_rt_mutex_slowunlock+0x10/0x10
? skb_queue_purge_reason.part.0+0x1bf/0x230
trace_kfree_skb_hit+0x15/0x20
trace_kfree_skb+0xe9/0x150
kfree_skb_reason+0x7b/0x110
skb_queue_purge_reason.part.0+0x1bf/0x230
? __pfx_skb_queue_purge_reason.part.0+0x10/0x10
? mark_lock.part.0+0x8a/0x520
...
trace_drop_common() also disables interrupts, but this is a minor issue
because we could easily replace it with a local_lock.
Replace the spin_lock with raw_spin_lock to avoid sleeping in atomic
context.
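The conversion is mechanical: the per-CPU data lock becomes a
raw_spinlock_t, which remains a true spinning lock on PREEMPT_RT
(struct fields abbreviated):

struct per_cpu_dm_data {
    raw_spinlock_t      lock;   /* was: spinlock_t */
    struct sk_buff      *skb;
    struct sk_buff_head drop_queue;
    struct work_struct  dm_alert_work;
    struct timer_list   send_timer;
};

/* In trace_drop_common() and the other lock sites: */
    unsigned long flags;

    raw_spin_lock_irqsave(&data->lock, flags);
    /* ... per-CPU drop accounting, unchanged ... */
    raw_spin_unlock_irqrestore(&data->lock, flags);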

In the Linux kernel, the following vulnerability has been resolved:
wifi: mt76: mt7921s: fix potential hung tasks during chip recovery
During chip recovery (e.g. chip reset), a situation can arise in which
the kernel worker reset_work holds the lock and waits for the kernel
thread stat_worker to be parked, while stat_worker is waiting for the
release of the same lock.
This deadlock leads to hung-task messages being dumped and possibly to
the device rebooting.
This patch prevents the execution of stat_worker during chip recovery.

In the Linux kernel, the following vulnerability has been resolved:
drm/lima: mask irqs in timeout path before hard reset
There is a race condition in which a rendering job might take just long
enough to trigger the drm sched job timeout handler but also still
complete before the hard reset is done by the timeout handler.
This runs into race conditions not expected by the timeout handler.
In some very specific cases it currently may result in a refcount
imbalance on lima_pm_idle, with a stack dump such as:
[10136.669170] WARNING: CPU: 0 PID: 0 at drivers/gpu/drm/lima/lima_devfreq.c:205 lima_devfreq_record_idle+0xa0/0xb0
...
[10136.669459] pc : lima_devfreq_record_idle+0xa0/0xb0
...
[10136.669628] Call trace:
[10136.669634] lima_devfreq_record_idle+0xa0/0xb0
[10136.669646] lima_sched_pipe_task_done+0x5c/0xb0
[10136.669656] lima_gp_irq_handler+0xa8/0x120
[10136.669666] __handle_irq_event_percpu+0x48/0x160
[10136.669679] handle_irq_event+0x4c/0xc0
We can prevent that race condition entirely by masking the irqs at the
beginning of the timeout handler, at which point we give up on waiting
for that job entirely.
The irqs will be enabled again at the next hard reset, which the
timeout handler already performs as a recovery step.
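A sketch of the idea in the timeout handler; the per-pipe mask hook
name is an assumption:

static enum drm_gpu_sched_stat
lima_sched_timedout_job(struct drm_sched_job *job)
{
    struct lima_sched_pipe *pipe = to_lima_pipe(job->sched);

    /* Mask the core's irqs first: from here on we have given up on
     * the job, and a late "task done" interrupt must not race the
     * hard reset below. The irqs are unmasked again by the hard
     * reset itself.
     */
    pipe->task_mask_irq(pipe); /* assumed per-pipe hook */

    /* ... existing timeout handling and hard reset ... */

    return DRM_GPU_SCHED_STAT_NOMINAL;
}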