Security Flaws in Open-Source ML Tools Could Lead to Code Execution

Cybersecurity researchers have uncovered multiple security flaws in popular open-source machine learning (ML) tools and frameworks, including MLflow, H2O, PyTorch, and MLeap. These vulnerabilities, discovered by JFrog, could allow malicious actors to exploit ML clients, potentially enabling remote code execution (RCE) and exposing sensitive organizational data.

Key Vulnerabilities Identified

  1. MLflow (CVE-2024-27132)
    1. Severity: High (CVSS Score: 7.2)
    2. Impact: Cross-site scripting (XSS) attack leading to client-side RCE when running untrusted recipes in Jupyter Notebooks.
  2. H2O (CVE-2024-6960)
    1. Severity: High (CVSS Score: 7.5)
    2. Impact: Unsafe deserialization of untrusted ML models, potentially resulting in RCE.
  3. PyTorch (TorchScript Feature)
    1. Severity: Critical
    2. Impact: Path traversal vulnerability allowing denial-of-service (DoS) or code execution by overwriting critical system files or legitimate pickle files.
  4. MLeap (CVE-2023-5245)
    1. Severity: High (CVSS Score: 7.5)
    2. Impact: Path traversal vulnerability (Zip Slip) when loading saved models in zipped format, enabling arbitrary file overwrite and code execution (see the sketch after this list).
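To make the Zip Slip class of bug concrete, here is a minimal Python sketch of the defensive check a model loader should perform before extracting a zipped archive. The `safe_extract` name and structure are illustrative assumptions on our part, not code from MLeap itself:

```python
import os
import zipfile

def safe_extract(zip_path: str, dest_dir: str) -> None:
    """Extract a zipped model archive while rejecting Zip Slip entries.

    Hypothetical sketch: an entry named '../../etc/cron.d/job' would
    otherwise escape dest_dir and overwrite an arbitrary file.
    """
    dest_root = os.path.realpath(dest_dir)
    with zipfile.ZipFile(zip_path) as archive:
        for entry in archive.infolist():
            # Resolve where the entry would land and confirm it stays
            # inside the destination directory.
            target = os.path.realpath(os.path.join(dest_root, entry.filename))
            if os.path.commonpath([dest_root, target]) != dest_root:
                raise ValueError(f"Blocked traversal entry: {entry.filename}")
        archive.extractall(dest_root)
```

A vulnerable loader skips the path check and extracts every entry verbatim, which is what makes arbitrary file overwrite, and from there code execution, possible.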

Potential Consequences

Exploitation of these vulnerabilities could:

  1. Hijack ML Clients: Attackers could perform lateral movement, gaining access to critical ML services like Model Registries and MLOps Pipelines.
  2. Expose Sensitive Data: Credentials stored in model registries could be compromised, allowing attackers to backdoor stored models.
  3. Achieve RCE: Arbitrary code execution can result in significant operational disruptions or even complete system compromise; the sketch below shows why loading an untrusted pickle-based model amounts to running untrusted code.
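The RCE risk from pickled model files is not theoretical: Python's pickle calls whatever callable an object's `__reduce__` method returns at load time. The harmless illustration below is our own example, not code from any of the affected projects:

```python
import pickle

class NotAModel:
    # When this object is unpickled, pickle invokes the returned
    # callable with the given arguments. A real attacker would use
    # os.system or similar instead of print.
    def __reduce__(self):
        return (print, ("this ran during pickle.loads()",))

payload = pickle.dumps(NotAModel())
pickle.loads(payload)  # loading the "model" executes code immediately
```

This is why several of the flaws above reduce to the same root cause: deserializing an untrusted model is equivalent to executing the model author's code.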

Critical Recommendations

To mitigate these risks, organizations should:

  1. Restrict Access: Limit who can download or upload models to ML systems.
  2. Verify Model Sources: Avoid loading ML models from untrusted or unknown sources, even when using “safe” formats like Safetensors; a defensive loading sketch follows this list.
  3. Monitor Frameworks and Libraries: Regularly update ML tools and apply patches for known vulnerabilities.
  4. Implement Security Practices:
    1. Use sandboxed environments when testing or deploying ML models.
    2. Employ static code analysis tools to identify vulnerabilities in ML scripts.
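As one concrete hardening step when pickle cannot be avoided, the pattern below restricts which globals a pickled model may reference, rejecting unexpected callables before they can run. It is adapted from the restricted-unpickler approach described in Python's pickle documentation; the allow-list contents are an assumption and would need to match what a legitimate checkpoint actually references:

```python
import io
import pickle

# Illustrative allow-list: only the (module, name) pairs a legitimate
# model checkpoint is expected to reference.
ALLOWED_GLOBALS = {
    ("collections", "OrderedDict"),
}

class RestrictedUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        # Reject anything outside the allow-list (e.g. os.system,
        # builtins.eval) before it can be instantiated or called.
        if (module, name) not in ALLOWED_GLOBALS:
            raise pickle.UnpicklingError(f"Blocked global: {module}.{name}")
        return super().find_class(module, name)

def restricted_loads(data: bytes):
    """Deserialize bytes with the allow-list enforced."""
    return RestrictedUnpickler(io.BytesIO(data)).load()
```

Even with an allow-list in place, model loading should still run in a sandboxed, least-privilege environment, since the other recommendations above address failure modes an unpickler cannot.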

Broader Implications

These vulnerabilities underscore the need for robust supply chain security in ML operations. As AI and ML continue to grow in importance, their tools become attractive targets for cybercriminals. Organizations must adopt proactive strategies to secure their ML workflows and prevent exploitation of critical systems.

The vulnerabilities found in popular ML frameworks like MLflow, PyTorch, H2O, and MLeap serve as a stark reminder of the security challenges posed by open-source tools. With the potential for RCE and lateral movement, securing ML environments should be a top priority for organizations.

