Total: 35 CVEs
CVE | Vendors | Products | Updated | CVSS v2 | CVSS v3 |
---|---|---|---|---|---|
CVE-2023-26031 | 1 Apache | 1 Hadoop | 2024-08-01 | N/A | 7.5 HIGH |
Relative library resolution in the Linux container-executor binary in Apache Hadoop 3.3.1-3.3.4 allows a local user to gain root privileges. If the YARN cluster is accepting work from remote (authenticated) users, this MAY permit remote users to gain root privileges. Hadoop 3.3.0 updated "YARN Secure Containers" (https://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/SecureContainer.html) to add a feature for executing user-submitted applications in isolated Linux containers. The native binary HADOOP_HOME/bin/container-executor is used to launch these containers; it must be owned by root and have the suid bit set in order for the YARN processes to run the containers as the specific users submitting the jobs. The patch YARN-10495 (https://issues.apache.org/jira/browse/YARN-10495, "make the rpath of container-executor configurable") modified the library loading path for .so files from "$ORIGIN/" to "$ORIGIN/:../lib/native/". This is the path through which libcrypto.so is located. Thus it is possible for a user with reduced privileges to install a malicious libcrypto library into a path to which they have write access, invoke the container-executor command, and have their modified library executed as root. If the YARN cluster is accepting work from remote (authenticated) users, and these users' submitted jobs are executed on the physical host rather than in a container, then this CVE permits remote users to gain root privileges. The fix for the vulnerability is to revert the change, which is done in YARN-11441 (https://issues.apache.org/jira/browse/YARN-11441, "Revert YARN-10495"); this patch is included in Hadoop 3.3.5. To determine whether a version of container-executor is vulnerable, use the readelf command. If the RUNPATH or RPATH value contains the relative path "../lib/native/", it is at risk: running "readelf -d container-executor | grep 'RUNPATH\|RPATH'" prints "0x000000000000001d (RUNPATH) Library runpath: [$ORIGIN/:../lib/native/]". If the same command prints "0x000000000000001d (RUNPATH) Library runpath: [$ORIGIN/]", it is safe. For an at-risk version of container-executor to enable privilege escalation, the owner must be root and the suid bit must be set, e.g. "ls -laF /opt/hadoop/bin/container-executor" shows "---Sr-s---. 1 root hadoop 802968 May 9 20:21 /opt/hadoop/bin/container-executor". A safe installation lacks the suid bit and ideally is also not owned by root, e.g. "-rwxr-xr-x. 1 yarn hadoop 802968 May 9 20:21 /opt/hadoop/bin/container-executor". This configuration does not support YARN Secure Containers, but all other Hadoop services, including YARN job execution outside secure containers, continue to work. (An illustrative Java detection sketch appears after the table.) | |||||
CVE-2022-25168 | 1 Apache | 1 Hadoop | 2024-02-04 | N/A | 9.8 CRITICAL |
Apache Hadoop's FileUtil.unTar(File, File) API does not escape the input file name before it is passed to the shell, so an attacker can inject arbitrary commands. This is only used in Hadoop 3.3 InMemoryAliasMap.completeBootstrapTransfer, which is only ever run by a local user. It has been used in Hadoop 2.x for YARN localization, which does enable remote code execution. It is used in Apache Spark, from the SQL command ADD ARCHIVE. As the ADD ARCHIVE command adds new binaries to the classpath, being able to execute shell scripts does not confer new permissions to the caller. SPARK-38305, "Check existence of file before untarring/zipping", which is included in Spark 3.3.0, 3.1.4, and 3.2.2, prevents shell commands from being executed regardless of which version of the Hadoop libraries is in use. Users should upgrade to Apache Hadoop 2.10.2, 3.2.4, 3.3.3 or higher (which include HADOOP-18136). (An illustrative sketch of this bug class appears after the table.) | |||||
CVE-2021-25642 | 1 Apache | 1 Hadoop | 2024-02-04 | N/A | 8.8 HIGH |
ZKConfigurationStore, which is optionally used by the CapacityScheduler of Apache Hadoop YARN, deserializes data obtained from ZooKeeper without validation. An attacker with access to ZooKeeper can run arbitrary commands as the YARN user by exploiting this. Users should upgrade to Apache Hadoop 2.10.2, 3.2.4, 3.3.4 or later (containing YARN-11126) if ZKConfigurationStore is used. (A generic deserialization-filtering sketch appears after the table.) | |||||
CVE-2022-26612 | 2 Apache, Microsoft | 2 Hadoop, Windows | 2024-02-04 | 7.5 HIGH | 9.8 CRITICAL |
In Apache Hadoop, the unTar function uses the unTarUsingJava function on Windows and the built-in tar utility on Unix and other OSes. As a result, a TAR entry may create a symlink under the expected extraction directory which points to an external directory. A subsequent TAR entry may extract an arbitrary file into the external directory using the symlink name. This would be caught by the targetDirPath check on Unix because of the getCanonicalPath call; on Windows, however, getCanonicalPath does not resolve symbolic links, which bypasses the check. unpackEntries during TAR extraction follows symbolic links, which allows writing outside the expected base directory on Windows. This was addressed in Apache Hadoop 3.2.3. (An illustrative containment-check sketch appears after the table.) | |||||
CVE-2021-33036 | 1 Apache | 1 Hadoop | 2024-02-04 | 9.0 HIGH | 8.8 HIGH |
In Apache Hadoop 2.2.0 to 2.10.1, 3.0.0-alpha1 to 3.1.4, 3.2.0 to 3.2.2, and 3.3.0 to 3.3.1, a user who can escalate to yarn user can possibly run arbitrary commands as root user. Users should upgrade to Apache Hadoop 2.10.2, 3.2.3, 3.3.2 or higher. | |||||
CVE-2021-37404 | 1 Apache | 1 Hadoop | 2024-02-04 | 7.5 HIGH | 9.8 CRITICAL |
There is a potential heap buffer overflow in Apache Hadoop libhdfs native code. Opening a file path provided by a user without validation may result in a denial of service or arbitrary code execution. Users should upgrade to Apache Hadoop 2.10.2, 3.2.3, 3.3.2 or higher. | |||||
CVE-2018-11765 | 1 Apache | 1 Hadoop | 2024-02-04 | 4.3 MEDIUM | 7.5 HIGH |
In Apache Hadoop versions 3.0.0-alpha2 to 3.0.0, 2.9.0 to 2.9.2, and 2.8.0 to 2.8.5, any user can access some servlets without authentication when Kerberos authentication is enabled and SPNEGO through HTTP is not enabled. | |||||
CVE-2020-9492 | 1 Apache | 2 Hadoop, Solr | 2024-02-04 | 6.5 MEDIUM | 8.8 HIGH |
In Apache Hadoop 3.2.0 to 3.2.1, 3.0.0-alpha1 to 3.1.3, and 2.0.0-alpha to 2.10.0, WebHDFS client might send SPNEGO authorization header to remote URL without proper verification. | |||||
CVE-2018-11764 | 1 Apache | 1 Hadoop | 2024-02-04 | 9.0 HIGH | 8.8 HIGH |
Web endpoint authentication check is broken in Apache Hadoop 3.0.0-alpha4, 3.0.0-beta1, and 3.0.0. Authenticated users may impersonate any user even if no proxy user is configured. | |||||
CVE-2019-17195 | 3 Apache, Connect2id, Oracle | 15 Hadoop, Nimbus Jose+jwt, Communications Cloud Native Core Security Edge Protection Proxy and 12 more | 2024-02-04 | 6.8 MEDIUM | 9.8 CRITICAL |
Connect2id Nimbus JOSE+JWT before v7.9 can throw various uncaught exceptions while parsing a JWT, which could result in an application crash (potential information disclosure) or a potential authentication bypass. | |||||
CVE-2012-2945 | 1 Apache | 1 Hadoop | 2024-02-04 | 5.0 MEDIUM | 7.5 HIGH |
Hadoop 1.0.3 contains a symlink vulnerability. | |||||
CVE-2018-11768 | 1 Apache | 1 Hadoop | 2024-02-04 | 5.0 MEDIUM | 7.5 HIGH |
In Apache Hadoop 3.1.0 to 3.1.1, 3.0.0-alpha1 to 3.0.3, 2.9.0 to 2.9.1, and 2.0.0-alpha to 2.8.4, the user/group information can be corrupted when it is stored in the fsimage and later read back from it. | |||||
CVE-2018-8029 | 1 Apache | 1 Hadoop | 2024-02-04 | 9.0 HIGH | 8.8 HIGH |
In Apache Hadoop versions 3.0.0-alpha1 to 3.1.0, 2.9.0 to 2.9.1, and 2.2.0 to 2.8.4, a user who can escalate to yarn user can possibly run arbitrary commands as root user. | |||||
CVE-2018-11767 | 1 Apache | 1 Hadoop | 2024-02-04 | 5.8 MEDIUM | 7.4 HIGH |
In Apache Hadoop 2.9.0 to 2.9.1, 2.8.3 to 2.8.4, and 2.7.5 to 2.7.6, the KMS may block users or grant access to users incorrectly if the system uses non-default group mapping mechanisms. | |||||
CVE-2018-8009 | 1 Apache | 1 Hadoop | 2024-02-04 | 6.5 MEDIUM | 8.8 HIGH |
Apache Hadoop 3.1.0, 3.0.0-alpha to 3.0.2, 2.9.0 to 2.9.1, 2.8.0 to 2.8.4, 2.0.0-alpha to 2.7.6, and 0.23.0 to 0.23.11 are exploitable via the zip slip vulnerability in places that accept a zip file. (A standard zip-slip guard sketch appears after the table.) | |||||
CVE-2018-1296 | 1 Apache | 1 Hadoop | 2024-02-04 | 5.0 MEDIUM | 7.5 HIGH |
In Apache Hadoop 3.0.0-alpha1 to 3.0.0, 2.9.0, 2.8.0 to 2.8.3, and 2.5.0 to 2.7.5, HDFS exposes extended attribute key/value pairs during listXAttrs, verifying only path-level search access to the directory rather than path-level read permission to the referent. | |||||
CVE-2018-11766 | 1 Apache | 1 Hadoop | 2024-02-04 | 9.0 HIGH | 8.8 HIGH |
In Apache Hadoop 2.7.4 to 2.7.6, the security fix for CVE-2016-6811 is incomplete. A user who can escalate to yarn user can possibly run arbitrary commands as root user. | |||||
CVE-2017-15718 | 1 Apache | 1 Hadoop | 2024-02-04 | 5.0 MEDIUM | 9.8 CRITICAL |
The YARN NodeManager in Apache Hadoop 2.7.3 and 2.7.4 can leak the password for the credential store provider used by the NodeManager to YARN Applications. | |||||
CVE-2017-15713 | 1 Apache | 1 Hadoop | 2024-02-04 | 4.0 MEDIUM | 6.5 MEDIUM |
A vulnerability in Apache Hadoop 0.23.x, 2.x before 2.7.5, 2.8.x before 2.8.3, and 3.0.0-alpha through 3.0.0-beta1 allows a cluster user to expose private files owned by the user running the MapReduce job history server process. The malicious user can construct a configuration file containing XML directives that reference sensitive files on the MapReduce job history server host. (A general XML-hardening sketch appears after the table.) | |||||
CVE-2016-5001 | 1 Apache | 1 Hadoop | 2024-02-04 | 2.1 LOW | 5.5 MEDIUM |
This is an information disclosure vulnerability in Apache Hadoop before 2.6.4 and 2.7.x before 2.7.2 in the short-circuit reads feature of HDFS. A local user on an HDFS DataNode may be able to craft a block token that grants unauthorized read access to random files by guessing certain fields in the token. |
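
The CVE-2023-26031 entry above spells out readelf and ls checks for an at-risk container-executor. As a rough convenience only, here is a minimal Java sketch that wraps the same readelf check and reports the binary's owner; the class name and argument handling are made up for illustration, and the default path /opt/hadoop/bin/container-executor is the one from the example above, an assumption about your layout. The suid bit still needs to be checked with ls as shown in the entry.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Minimal sketch for CVE-2023-26031: run "readelf -d" on container-executor and
// flag a RUNPATH/RPATH that contains the relative "../lib/native/" entry.
public class ContainerExecutorCheck {
    public static void main(String[] args) throws Exception {
        // Assumed default location; pass the real path as the first argument.
        Path binary = Paths.get(args.length > 0 ? args[0] : "/opt/hadoop/bin/container-executor");

        Process p = new ProcessBuilder("readelf", "-d", binary.toString())
                .redirectErrorStream(true)
                .start();
        boolean atRisk = false;
        try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null) {
                if ((line.contains("RUNPATH") || line.contains("RPATH"))
                        && line.contains("../lib/native/")) {
                    atRisk = true;  // relative library path => vulnerable build
                }
            }
        }
        p.waitFor();

        System.out.println("Binary: " + binary + " (owner: " + Files.getOwner(binary).getName() + ")");
        System.out.println(atRisk
                ? "RUNPATH contains ../lib/native/ -- at risk; also check the suid bit with ls."
                : "RUNPATH looks safe ($ORIGIN/ only expected).");
    }
}
```

Run it on the node that hosts the binary; readelf must be installed there.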
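For CVE-2022-25168 above, the root cause is a file name spliced into a shell command line. The sketch below illustrates that bug class only and is not Hadoop's actual FileUtil code: the unsafe variant lets a crafted name such as "x.tar; touch /tmp/pwned" run extra commands, while the safer variant passes each value as a separate argv element so no shell ever interprets the name.

```java
import java.io.File;
import java.io.IOException;

// Illustrative only -- not Hadoop's FileUtil implementation.
public class UntarSketch {

    // Unsafe pattern: the file name becomes part of a shell command string,
    // so shell metacharacters in the name are executed.
    static void untarUnsafe(File archive, File dest) throws IOException, InterruptedException {
        String cmd = "tar -xf " + archive.getPath() + " -C " + dest.getPath();
        new ProcessBuilder("bash", "-c", cmd).inheritIO().start().waitFor();
    }

    // Safer pattern: no shell is involved; each value is a single argument.
    static void untarSafer(File archive, File dest) throws IOException, InterruptedException {
        new ProcessBuilder("tar", "-xf", archive.getAbsolutePath(), "-C", dest.getAbsolutePath())
                .inheritIO().start().waitFor();
    }
}
```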
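For CVE-2021-25642 above, the general defence is to treat bytes read from ZooKeeper as untrusted and constrain what Java deserialization may instantiate. This is a generic mitigation sketch, not the actual YARN-11126 change; the allowed-class pattern is an assumption and would need to list the types the store actually persists.

```java
import java.io.ByteArrayInputStream;
import java.io.ObjectInputFilter;
import java.io.ObjectInputStream;

// Generic sketch: filtered Java deserialization (Java 9+ ObjectInputFilter).
public class FilteredDeserialization {
    public static Object readUntrusted(byte[] bytes) throws Exception {
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            // Allow only expected value types; "!*" rejects everything else.
            in.setObjectInputFilter(ObjectInputFilter.Config.createFilter(
                    "java.lang.*;java.util.*;!*"));
            return in.readObject();
        }
    }
}
```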
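For CVE-2022-26612 above, the bypass exists because java.io.File.getCanonicalPath() does not resolve symbolic links on Windows. A containment check along the following lines, built on Path.toRealPath() (which is documented to resolve symbolic links), is one way to express the intent; it is a sketch, not the actual Hadoop 3.2.3 patch, and both paths must already exist on disk.

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Path;

// Sketch: verify an extracted entry really lives under the base directory
// after all symbolic links are resolved.
public class ExtractionTargetCheck {
    static boolean staysInside(File baseDir, File candidate) throws IOException {
        Path base = baseDir.toPath().toRealPath();      // resolves symlinks
        Path target = candidate.toPath().toRealPath();  // resolves symlinks
        return target.startsWith(base);
    }
}
```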
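For CVE-2018-8009 above, the widely published zip-slip guard is to resolve each entry name against the destination directory and reject anything that escapes it. The sketch below shows that standard check; it is not taken from the Hadoop patch itself, and the class and method names are illustrative.

```java
import java.io.File;
import java.io.IOException;

// Standard zip-slip guard: reject entry names such as "../../etc/cron.d/evil".
public class ZipSlipGuard {
    static File resolveEntry(File destDir, String entryName) throws IOException {
        File dest = new File(destDir, entryName);
        String prefix = destDir.getCanonicalPath() + File.separator;
        if (!dest.getCanonicalPath().startsWith(prefix)) {
            throw new IOException("Zip entry escapes target directory: " + entryName);
        }
        return dest;
    }
}
```

Call a check like this for every archive entry before any bytes are written.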
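For CVE-2017-15713 above, the attack relies on XML directives in a user-supplied configuration file pulling in files from the job history server host. The advisory does not describe the fix, so the sketch below only shows the general hardening for parsing untrusted XML in Java: refuse doctype declarations and disable XInclude and entity expansion.

```java
import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

// General hardening sketch for parsing untrusted XML configuration.
public class HardenedXmlParser {
    public static Document parse(File xml) throws Exception {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        // Reject DOCTYPE declarations outright (blocks external entities).
        dbf.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true);
        dbf.setXIncludeAware(false);           // no XInclude of local files
        dbf.setExpandEntityReferences(false);  // no entity expansion
        return dbf.newDocumentBuilder().parse(xml);
    }
}
```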