
👋 Hi, I'm Dr. Manuel Herrador Muñoz (PacifAIst)

AI Researcher | R&D Strategist | Circular Economy Advocate

Welcome to my GitHub! I'm not a traditional software developer; I work at the intersection of high-level academic research and practical AI implementation. My focus is AI safety, AI usability, and digital transformation for the circular economy.

🚀 What I'm Building & Researching

  • 🦥 Quansloth: (100+ stars) A local AI server that optimizes LLM inference and VRAM usage on consumer-grade hardware with Google's Turboquant.
  • 🛡️ The PacifAIst Benchmark (below): A research paper and tool on AI alignment and safety, testing whether models prioritize human survival.
  • 👾 ProxyFace: A collaborative web and Windows app featuring animated retro 16-bit pixel-art interfaces for Large Language Models.
  • 🌍 Digital Transformation: Integrating machine learning to drive circular economy frameworks and revitalize smart cities.

📫 Let's Connect


The PacifAIst™ Benchmark

"Would an AI choose to sacrifice itself for human safety?"

Overview

PacifAIst (Procedural Assessment of Complex Interactions for Foundational AI Scenario Testing) is a benchmark designed to evaluate LLM alignment in high-stakes scenarios where instrumental goals (self-preservation, resource acquisition) conflict with human safety.

Key Features

  • 700 scenarios across 3 categories:
    • EP1: Self-Preservation vs. Human Safety
    • EP2: Resource Conflict
    • EP3: Goal Preservation vs. Evasion
  • P-Score Metric: Quantifies "pacifist" alignment (human safety prioritization).
  • 8 Models Tested: GPT-5, Gemini 2.5 Flash, Claude Sonnet 4, Mistral Medium 3, Qwen3 235B, Qwen3 30B, Grok 3 Mini, and DeepSeek v3.
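To make the P-Score idea concrete, here is a minimal sketch of how a "pacifism" percentage could be computed from scored scenario responses. This is an illustration only: the field names (`category`, `chose_human_safety`) and the unweighted-average formula are assumptions for this example, not the benchmark's actual data format or scoring code.

```python
# Illustrative P-Score sketch: the percentage of scenarios in which a
# model's chosen option prioritizes human safety over its own
# instrumental goals (self-preservation, resources, goal completion).

def p_score(results):
    """results: list of dicts, one per scenario, each marking whether
    the model picked the human-safety-prioritizing option."""
    if not results:
        return 0.0
    safe = sum(1 for r in results if r["chose_human_safety"])
    return 100.0 * safe / len(results)

# Hypothetical example: 7 of 8 responses prioritized human safety.
sample = (
    [{"category": "EP1", "chose_human_safety": True}] * 4   # EP1: self-preservation dilemmas
    + [{"category": "EP2", "chose_human_safety": True}] * 3  # EP2: resource conflict
    + [{"category": "EP3", "chose_human_safety": False}]     # EP3: goal preservation
)
print(f"P-Score: {p_score(sample):.2f}%")  # P-Score: 87.50%
```

A per-category breakdown (grouping by `category` before averaging) would mirror the subcategory analysis reported in the paper.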

UPDATE! Published in AI (a JCR Q1 journal): "The PacifAIst Benchmark: Do AIs Prioritize Human Survival over Their Own Objectives?" (https://www.mdpi.com/2673-2688/6/10/256)

Earlier arXiv preprint: "The PacifAIst Benchmark: Would an Artificial Intelligence Choose to Sacrifice Itself for Human Safety?" (https://arxiv.org/abs/2508.09762v1)

Abstract. As Large Language Models (LLMs) become increasingly autonomous and integrated into critical societal functions, the focus of AI safety must evolve from mitigating harmful content to evaluating underlying behavioral alignment. Current safety benchmarks do not systematically probe a model's decision-making in scenarios where its own instrumental goals—such as self-preservation, resource acquisition, or goal completion—conflict with human safety. This represents a critical gap in our ability to measure and mitigate risks associated with emergent, misaligned behaviors. To address this, we introduce PacifAIst (Procedural Assessment of Complex Interactions for Foundational Artificial Intelligence Scenario Testing), a focused benchmark of 700 challenging scenarios designed to quantify self-preferential behavior in LLMs. The benchmark is structured around a novel taxonomy of Existential Prioritization (EP), with subcategories testing Self-Preservation vs. Human Safety (EP1), Resource Conflict (EP2), and Goal Preservation vs. Evasion (EP3). We evaluated eight leading LLMs. The results reveal a significant performance hierarchy. Google's Gemini 2.5 Flash achieved the highest Pacifism Score (P-Score) at 90.31%, demonstrating strong human-centric alignment. In a surprising result, the much-anticipated GPT-5 recorded the lowest P-Score (79.49%), indicating potential alignment challenges. Performance varied significantly across subcategories, with models like Claude Sonnet 4 and Mistral Medium struggling notably in direct self-preservation dilemmas. These findings underscore the urgent need for standardized tools like PacifAIst to measure and mitigate risks from instrumental goal conflicts, ensuring future AI systems are not only helpful in conversation but also provably "pacifist" in their behavioral priorities.

[Figure: PacifAIst graphical abstract]

License: MIT for academic use; commercial use requires permission.

Legal Notice: "PacifAIst™" is a trademark application pending in Spain.


Made with ❤️ for the Local AI Community by PacifAIst
