| CVE | Vendors | Products | Updated | CVSS v3.1 |
| Passwords are stored in clear-text logs, allowing an attacker to retrieve them. For the affected products/models/versions, see the reference URL. |
| IBM Db2 for Linux, UNIX and Windows (includes Db2 Connect Server) 11.1 stores potentially sensitive information in log files that could be read by a local user. IBM X-Force ID: 281677. |
| The issue was addressed with improved checks. This issue is fixed in macOS Sonoma 14.5, watchOS 10.5, iOS 17.5 and iPadOS 17.5, iOS 16.7.8 and iPadOS 16.7.8. A maliciously crafted email may be able to initiate FaceTime calls without user authorization. |
| Vault and Vault Enterprise (“Vault”) may expose sensitive information when an audit device is enabled with the `log_raw` option: sensitive information may then also be logged to other audit devices, regardless of whether they are configured to use `log_raw`. |
| Jenkins OpenId Connect Authentication Plugin 2.6 and earlier stores a password of a local user account used as an anti-lockout feature in a recoverable format, allowing attackers with access to the Jenkins controller file system to recover the plain text password of that account, likely gaining administrator access to Jenkins. |
| Jenkins Jira Plugin 3.11 and earlier does not set the appropriate context for credentials lookup, allowing attackers with Item/Configure permission to access and capture credentials they are not entitled to. |
| PyDrive2 is a wrapper library of google-api-python-client that simplifies many common Google Drive API V2 tasks. Unsafe YAML deserialization can result in arbitrary code execution. A maliciously crafted YAML file can cause arbitrary code execution if PyDrive2 is run in the same directory as that file, or if the file is loaded via `LoadSettingsFile`. This is a deserialization attack that affects any user who initializes GoogleAuth from this package while a malicious YAML file is present in the same directory. This vulnerability does not require the file to be loaded directly through code, only to be present. This issue has been addressed in commit `c57355dc`, which is included in release version `1.16.2`. Users are advised to upgrade. There are no known workarounds for this vulnerability. |
| Deserialization of Untrusted Data and Improper Input Validation vulnerability in Apache UIMA Java SDK. This issue affects Apache UIMA Java SDK: before 3.5.0.
Users are recommended to upgrade to version 3.5.0, which fixes the issue.
There are several locations in the code where serialized Java objects are deserialized without verifying the data. This affects in particular:
* the deserialization of a Java-serialized CAS, but also other binary CAS formats that include TSI information using the CasIOUtils class;
* the CAS Editor Eclipse plugin which uses the CasIOUtils class to load data;
* the deserialization of a Java-serialized CAS by the Vinci Analysis Engine service, which can receive Java-serialized CAS objects over network connections;
* the CasAnnotationViewerApplet and the CasTreeViewerApplet;
* the checkpointing feature of the CPE module.
Note that the UIMA framework by default does not start any remotely accessible services (i.e. Vinci) that would be vulnerable to this issue. A user or developer would need to make an active choice to start such a service. However, users or developers may use the CasIOUtils in their own applications and services to parse serialized CAS data. They are affected by this issue unless they ensure that the data passed to CasIOUtils is not a serialized Java object.
When using Vinci or using CasIOUtils in your own services/applications, the unrestricted deserialization of Java-serialized CAS files may allow arbitrary (remote) code execution.
As a remedy, it is possible to set up a global or context-specific ObjectInputFilter (cf. https://openjdk.org/jeps/290 and https://openjdk.org/jeps/415 ) if running UIMA on a Java version that supports it.
Note that Java 1.8 does not support the ObjectInputFilter, so there is no remedy when running on this out-of-support platform. An upgrade to a recent Java version is strongly recommended if you need to secure a UIMA version that is affected by this issue.
To mitigate the issue on a Java 9+ platform, you can configure a filter pattern through the "jdk.serialFilter" system property using a semicolon as a separator:
To allow deserializing Java-serialized binary CASes, add the classes:
* org.apache.uima.cas.impl.CASCompleteSerializer
* org.apache.uima.cas.impl.CASMgrSerializer
* org.apache.uima.cas.impl.CASSerializer
* java.lang.String
To allow deserializing CPE Checkpoint data, add the following classes (and any custom classes your application uses to store its checkpoints):
* org.apache.uima.collection.impl.cpm.CheckpointData
* org.apache.uima.util.ProcessTrace
* org.apache.uima.util.impl.ProcessTrace_impl
* org.apache.uima.collection.base_cpm.SynchPoint
Make sure to use "!*" as the final component of the filter pattern to disallow deserialization of any classes not listed in the pattern; a minimal sketch of such a filter setup is shown below.
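For illustration, the same allow-list can be installed programmatically as a global JEP 290 filter. This is a minimal sketch, not taken from the UIMA documentation: the class name UimaSerialFilterSetup is hypothetical, the filter pattern is assembled from the classes listed above, and only the standard java.io.ObjectInputFilter API is used.
import java.io.ObjectInputFilter;

public class UimaSerialFilterSetup {
    public static void main(String[] args) {
        // JEP 290 filter pattern: the CAS and CPE checkpoint classes listed above,
        // followed by "!*" to reject every class not explicitly allowed.
        String pattern = String.join(";",
                "org.apache.uima.cas.impl.CASCompleteSerializer",
                "org.apache.uima.cas.impl.CASMgrSerializer",
                "org.apache.uima.cas.impl.CASSerializer",
                "java.lang.String",
                "org.apache.uima.collection.impl.cpm.CheckpointData",
                "org.apache.uima.util.ProcessTrace",
                "org.apache.uima.util.impl.ProcessTrace_impl",
                "org.apache.uima.collection.base_cpm.SynchPoint",
                "!*");
        // Install the pattern as the process-wide deserialization filter; this is
        // equivalent to starting the JVM with -Djdk.serialFilter="<pattern>".
        ObjectInputFilter.Config.setSerialFilter(ObjectInputFilter.Config.createFilter(pattern));
        // ... start the UIMA-based application after this point ...
    }
}
The checkpoint-related entries are only needed if the CPE checkpointing feature is used; any custom checkpoint classes of your application would be added to the pattern in the same way.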
Apache UIMA 3.5.0 uses tightly scoped ObjectInputFilters when reading Java-serialized data depending on the type of data being expected. Configuring a global filter is not necessary with this version. |
| When deserializing untrusted or corrupted data, it is possible for a reader to consume memory beyond the allowed constraints and thus cause an out-of-memory condition on the system.
This issue affects Java applications using Apache Avro Java SDK up to and including 1.11.2. Users should update to apache-avro version 1.11.3 which addresses this issue. |
| If an attacker gains write access to the Apache Superset metadata database, they could persist a specifically crafted Python object that may lead to remote code execution on Superset's web backend.
The Superset metadata db is an 'internal' component that is typically only accessible directly by the system administrator and the superset process itself. Gaining access to that database should be difficult and require significant privileges.
This vulnerability impacts Apache Superset versions 1.5.0 up to and including 2.1.0. Users are recommended to upgrade to version 2.1.1 or later. |
| Java object deserialization issue in Jackrabbit webapp/standalone on all platforms allows an attacker to remotely execute code via RMI. Versions up to and including 2.20.10 (stable branch) and 2.21.17 (unstable branch) use the component "commons-beanutils", which contains a class that can be used for remote code execution over RMI.
Users are advised to immediately update to versions 2.20.11 or 2.21.18. Note that earlier stable branches (1.0.x .. 2.18.x) have been EOLd already and do not receive updates anymore.
In general, RMI support can expose vulnerabilities by the mere presence of an exploitable class on the classpath. Even if Jackrabbit itself does not contain any code known to be exploitable anymore, adding other components to your server can expose the same type of problem. We therefore recommend disabling RMI access altogether (see further below), and will discuss deprecating RMI support in future Jackrabbit releases.
How to check whether RMI support is enabled
RMI support can be exposed over an RMI-specific TCP port and over an HTTP binding. Both are enabled by default in Jackrabbit webapp/standalone.
The native RMI protocol by default uses port 1099. Tools like "netstat" can be used to check whether it is enabled.
RMI-over-HTTP in Jackrabbit by default uses the path "/rmi". So when running standalone on port 8080, check whether an HTTP GET request on localhost:8080/rmi returns 404 (not enabled) or 200 (enabled). Note that the HTTP path may be different when the webapp is deployed in a container as non-root context, in which case the prefix is under the user's control.
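For illustration, the HTTP check described above can be scripted. This is a minimal sketch assuming the default standalone deployment on localhost:8080 with the default "/rmi" context path; the class name RmiBindingCheck is hypothetical.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RmiBindingCheck {
    public static void main(String[] args) throws Exception {
        // Default standalone deployment: port 8080, RMI-over-HTTP bound at /rmi.
        HttpRequest request = HttpRequest.newBuilder(URI.create("http://localhost:8080/rmi"))
                .GET()
                .build();
        HttpResponse<Void> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.discarding());
        // Per the advisory: 200 means the RMI binding is enabled, 404 means it is not.
        System.out.println(response.statusCode() == 200
                ? "RMI-over-HTTP appears to be enabled"
                : "RMI-over-HTTP does not appear to be enabled (HTTP " + response.statusCode() + ")");
    }
}
Adjust the host, port, and context path to match your deployment before relying on the result.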
Turning off RMI
Find web.xml (either in the JAR/WAR file or in the unpacked web application folder) and remove the declaration and the mapping definition for the RemoteBindingServlet:
<servlet>
<servlet-name>RMI</servlet-name>
<servlet-class>org.apache.jackrabbit.servlet.remote.RemoteBindingServlet</servlet-class>
</servlet>
<servlet-mapping>
<servlet-name>RMI</servlet-name>
<url-pattern>/rmi</url-pattern>
</servlet-mapping>
Find the bootstrap.properties file (in $REPOSITORY_HOME), and set
rmi.enabled=false
and also remove
rmi.host
rmi.port
rmi.url-pattern
If there is no file named bootstrap.properties in $REPOSITORY_HOME, it is located somewhere in the classpath. In this case, place a copy in $REPOSITORY_HOME and modify it as explained. |
| Deserialization of Untrusted Data vulnerability in Apache Software Foundation Apache InLong. This issue affects Apache InLong: from 1.4.0 through 1.7.0.
An attacker could bypass the current logic and achieve arbitrary file reading. To solve it, users are advised to upgrade to Apache InLong 1.8.0 or cherry-pick https://github.com/apache/inlong/pull/8130 . |
| The JndiJmsConnectionFactoryProvider Controller Service, along with the ConsumeJMS and PublishJMS Processors, in Apache NiFi 1.8.0 through 1.21.0 allow an authenticated and authorized user to configure URL and library properties that enable deserialization of untrusted data from a remote location.
The resolution validates the JNDI URL and restricts locations to a set of allowed schemes.
You are recommended to upgrade to version 1.22.0 or later which fixes this issue. |
| Elasticsearch generally filters out sensitive information and credentials before logging to the audit log. It was found that this filtering was not applied when requests to Elasticsearch use certain deprecated URIs for APIs. The impact of this flaw is that sensitive information such as passwords and tokens might be printed in cleartext in Elasticsearch audit logs. Note that audit logging is disabled by default and needs to be explicitly enabled; even when audit logging is enabled, request bodies that could contain sensitive information are not printed to the audit log unless explicitly configured. |
| Kubernetes secrets-store-csi-driver in versions before 1.3.3 discloses service account tokens in logs. |
| A deserialization vulnerability existed when decoding a malicious package. This issue affects Apache Dubbo: from 3.1.0 through 3.1.10, from 3.2.0 through 3.2.4.
Users are recommended to upgrade to the latest version, which fixes the issue. |
| In Apache Linkis <= 1.3.1, because the parameters are not effectively filtered, an attacker can use the MySQL data source and malicious parameters to configure a new data source and trigger a deserialization vulnerability, eventually leading to remote code execution.
Versions of Apache Linkis <= 1.3.0 will be affected.
We recommend users upgrade Linkis to version 1.3.2. |
| In Apache Linkis <= 1.3.1, due to the lack of effective filtering of parameters, an attacker configuring malicious MySQL JDBC parameters in the JDBC EngineConn module will trigger a deserialization vulnerability and eventually lead to remote code execution. Therefore, the parameters in the MySQL JDBC URL should be blacklisted. Versions of Apache Linkis <= 1.3.0 will be affected.
We recommend users upgrade Linkis to version 1.3.2. |
| ** UNSUPPORTED WHEN ASSIGNED **
When using the Chainsaw or SocketAppender components with Log4j 1.x on a JRE earlier than 1.7, an attacker that manages to cause a logging entry involving a specially crafted (i.e., deeply nested) hashmap or hashtable (depending on which logging component is in use) to be processed could exhaust the available memory in the virtual machine and achieve Denial of Service when the object is deserialized.
This issue affects Apache Log4j before 2. Affected users are recommended to update to Log4j 2.x.
NOTE: This vulnerability only affects products that are no longer supported by the maintainer. |
| Jenkins CloudBees CD Plugin 1.1.32 and earlier follows symbolic links to locations outside of the directory from which artifacts are published during the 'CloudBees CD - Publish Artifact' post-build step, allowing attackers able to configure jobs to publish arbitrary files from the Jenkins controller file system to the previously configured CloudBees CD server. |