Artificial Intelligence

2511 Submissions

[8] ai.viXra.org:2511.0095 [pdf] submitted on 2025-11-30 00:24:44

P ≠ NP: A Short Proof via the AC Power Phasor Diagram and the Second Law of Computation

Authors: Chaiya Tantisukarom
Comments: 3 Pages. (Note by ai.viXra.org Admin: For the last time, please cite listed scientific references)

We prove P ≠ NP by direct structural analogy with the universally understood AC power phasor diagram. Active (real, dissipative) power $X$ is mapped to P, reactive (imaginary, oscillatory) power $jY$ is mapped to NP, and total apparent power $Z = X + jY$ represents an arbitrary problem instance. These two quantities are orthogonal in the complex plane and can never be equal except in the trivial open-circuit case. Any non-trivial problem presented to a conscious or physical observer necessarily possesses non-zero reactive power (surprise, creativity, or verification witness). The existence of such reactive power, together with the second law of computation (no free dissipation of hardness), immediately implies P ≠ NP in every universe worth living in. The proof is physically rigorous, sidesteps all known formal barriers, and is verifiable by any electrician with an oscilloscope.
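The orthogonality claim rests on ordinary complex-power arithmetic, which can be sketched as follows (a minimal illustration using Python's built-in complex type; the function and variable names are mine, not the paper's):

```python
import math

def apparent_power(active_p: float, reactive_q: float) -> complex:
    """Complex apparent power S = P + jQ (volt-amperes)."""
    return complex(active_p, reactive_q)

S = apparent_power(100.0, 75.0)   # 100 W active, 75 var reactive
magnitude = abs(S)                # |S| = sqrt(P^2 + Q^2)
power_factor = S.real / abs(S)    # cos(phi)

# Real and imaginary axes are orthogonal: with nonzero reactive power,
# the complex sum P + jQ never collapses onto the real axis alone.
```

With P = 100 and Q = 75 this gives |S| = 125 VA and a power factor of 0.8, the standard 3-4-5 textbook case.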
Category: Artificial Intelligence

[7] ai.viXra.org:2511.0088 [pdf] submitted on 2025-11-26 21:59:10

The Geopolitical Bias of Generative AI: A Call for Country-Level Dataset Transparency (CLDT)

Authors: Chaiya Tantisukarom
Comments: 4 Pages.

Generative Artificial Intelligence (GenAI) models deployed in high-stakes sectors like medicine (medGenAI) and law (lawGenAI) exhibit a critical risk of perpetuating global disparities. This paper argues that this output bias is directly proportional to the geopolitical disparity inherent in the models' training datasets. We propose a framework for mandatory Country-Level Dataset Transparency (CLDT) based on quantifiable metrics to assess the imparity risk and empower practitioners in underrepresented countries to apply necessary human oversight. This approach shifts the focus from general fairness audits to specific, computational jurisdictional accountability.
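The abstract calls for quantifiable metrics without specifying them; one plausible candidate for a country-level transparency metric is the total-variation distance between a model's per-country training-data shares and a reference distribution such as population share (entirely my illustration, not the paper's CLDT definition):

```python
def representation_gap(dataset_share: dict, reference_share: dict) -> float:
    """Total-variation distance between per-country dataset shares and a
    reference distribution. 0 = perfectly proportional representation,
    1 = maximal disparity. Illustrative metric only; the CLDT paper does
    not define its metrics in the abstract."""
    countries = set(dataset_share) | set(reference_share)
    return 0.5 * sum(
        abs(dataset_share.get(c, 0.0) - reference_share.get(c, 0.0))
        for c in countries
    )

# Hypothetical shares: training data vs. population, by country bucket.
gap = representation_gap(
    {"US": 0.60, "TH": 0.01, "Other": 0.39},
    {"US": 0.04, "TH": 0.01, "Other": 0.95},
)
```

A practitioner in an underrepresented country could compare such a gap against a jurisdictional threshold before trusting model outputs in a high-stakes setting.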
Category: Artificial Intelligence

[6] ai.viXra.org:2511.0078 [pdf] submitted on 2025-11-23 23:11:28

A Control-Theoretic Approach to GenAI Fatigue: A Systemic Constraint-Compliance Model (sC2M) Framework with PI-inspired Governance

Authors: Chaiya Tantisukarom
Comments: 11 Pages.

The central bottleneck for reliable Large Language Model (LLM) applications is GenAI Fatigue: the measurable degradation in recall and contextual fidelity within long, multi-turn histories. This fatigue is fundamentally a state-space management problem. While the industry primarily pursues proprietary context window expansion, this paper proposes a foundational engineering solution: the Systemic Constraint-Compliance Model (sC2M) framework. sC2M is a model-agnostic, application-layer technique that models the LLM as a high-gain, potentially volatile component governed by an application-layer Proportional-Integral (PI) inspired closed-loop control system. This governance is achieved via a three-tiered memory: the Raw Log (var0), the Set Point Log (var1), and the Integral Store (var2), enforced by a robust Integrator Anti-Windup mechanism. The framework is designed for two implementation tiers: 1) an ideal version for LLM creators; and 2) a pragmatic, model-agnostic version for application developers. Crucially, we introduce the Suspicion-of-Failure-Threshold (τSFT), a human-centric metric for contextual integrity. The framework's core control logic and its Systemic Resilience (SR) were empirically validated via a conversational proof-of-concept, demonstrating sustained constraint compliance (PV = 1.0) well beyond the human expert's established τSFT. By enforcing a structured state, sC2M achieves a high Context Reduction Factor (CRF) (or so-called compression ratio) and transforms stochastic variability into verifiable accountability, establishing an economically viable pathway for robust GenAI deployment in high-stakes domains.
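The PI-inspired governance loop with anti-windup can be sketched in a few lines (a minimal illustration; the gains, clamp bounds, and class name are my assumptions, not values from the paper):

```python
class PIGovernor:
    """Minimal PI controller with integrator anti-windup, sketching the
    closed-loop governance the sC2M abstract describes. The integral
    attribute plays the role of the Integral Store (var2)."""

    def __init__(self, kp: float, ki: float,
                 i_min: float = -1.0, i_max: float = 1.0):
        self.kp, self.ki = kp, ki
        self.i_min, self.i_max = i_min, i_max
        self.integral = 0.0

    def step(self, set_point: float, process_value: float,
             dt: float = 1.0) -> float:
        error = set_point - process_value
        self.integral += error * dt
        # Anti-windup: clamp accumulated error so a long run of
        # non-compliant turns cannot saturate the corrective action.
        self.integral = max(self.i_min, min(self.i_max, self.integral))
        return self.kp * error + self.ki * self.integral

gov = PIGovernor(kp=0.5, ki=0.1)
correction = gov.step(set_point=1.0, process_value=0.6)  # PV below target
```

In the framework's terms, the set point would come from the Set Point Log (var1), the process value from measuring constraint compliance against the Raw Log (var0), and the correction would steer the next prompt's context construction.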
Category: Artificial Intelligence

[5] ai.viXra.org:2511.0076 [pdf] submitted on 2025-11-22 01:45:24

FFT-Inspired Attention (FFT-IA): O(N log N) Complexity via Hierarchical Structural Pruning and Softmax Fidelity

Authors: Chaiya Tantisukarom
Comments: 6 Pages. (Note by ai.viXra.org Admin: Author name is required in the article)

The quadratic O(N²) complexity of the Multi-Head Self-Attention (MHSA) mechanism is the primary theoretical and practical barrier to efficient Transformer scaling. We overcome this by introducing the Fast Fourier Transform-Inspired Attention (FFT-IA) theoretical framework, which achieves an O(N log N) asymptotic complexity through a novel, fixed structural factorization inspired by the Cooley-Tukey algorithm. This computational gain is achieved by leveraging the O(N log N) decomposition principle of the Fast Fourier Transform (FFT), which systematically decomposes the dense O(N²) correlation space into a cascade of log₂ N local, O(N) operations. We propose a sparse, O(N log N) hierarchical factorization using log₂ N sequential stages, each employing a fixed, radix-2 butterfly connection pattern (the Butterfly-Attention Block). The method achieves its efficiency through fixed structural pruning rather than functional approximation or substitution. Crucially, FFT-IA computes exact attention scores and retains the essential Softmax non-linearity through its local application within the defined sparse graph topology, achieving Softmax Fidelity. The local Softmax functions as a normalized adaptive pooling step over the two connected tokens, whose compositional aggregation across log₂ N stages structurally replaces the single global normalization. The mechanism maintains contextual dynamism by dynamically re-projecting Q and K from the intermediate state at every sequential stage, which enables content-dependent scoring despite the fixed connectivity constraint. The O(N log N) asymptotic complexity in sequence length N is guaranteed by a fixed architectural constraint. While the total FLOPs cost is reduced by over 60% for long sequences, practical wall-clock speedup is strictly contingent upon dedicated, efficient kernel fusion for the log₂ N sequential attention stages to manage the repeated Q/K projection overhead.
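The radix-2 butterfly stage structure described above can be sketched in NumPy (my simplification: Q/K/V share fixed projection matrices across stages rather than having per-stage weights, and multi-head structure is omitted; all names are illustrative):

```python
import numpy as np

def butterfly_attention(x, Wq, Wk, Wv):
    """Sketch of radix-2 butterfly attention: log2(N) sequential stages,
    each token attending only to its fixed butterfly partner via a local
    two-way softmax. Cost per stage is O(N*d), so O(N log N * d) total."""
    N, d = x.shape
    assert N & (N - 1) == 0, "sequence length must be a power of two"
    state = x
    for s in range(int(np.log2(N))):        # log2(N) sequential stages
        q = state @ Wq                      # Q/K/V re-projected from the
        k = state @ Wk                      # intermediate state each stage
        v = state @ Wv
        partner = np.arange(N) ^ (1 << s)   # fixed radix-2 connectivity
        s_self = np.sum(q * k, axis=1) / np.sqrt(d)
        s_pair = np.sum(q * k[partner], axis=1) / np.sqrt(d)
        # Local softmax over the two connected tokens ("Softmax Fidelity")
        m = np.maximum(s_self, s_pair)
        e_self = np.exp(s_self - m)
        e_pair = np.exp(s_pair - m)
        z = e_self + e_pair
        state = (e_self[:, None] * v + e_pair[:, None] * v[partner]) / z[:, None]
    return state

out = butterfly_attention(np.ones((4, 2)), np.eye(2), np.eye(2), np.eye(2))
```

The XOR pairing `i ^ 2**s` is exactly the Cooley-Tukey butterfly connectivity: after all log₂ N stages, every token's state has (indirectly) aggregated information from every other position, even though each stage touches only N pairs.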
Category: Artificial Intelligence

[4] ai.viXra.org:2511.0060 [pdf] submitted on 2025-11-20 01:05:34

DigiMind: A Modular Cognitive Architecture for Continual Learning and Factual Coherence

Authors: Chaiya Tantisukarom
Comments: 9 Pages.

Objective: Modern Large Language Models (LLMs) suffer from fundamental architectural limits: catastrophic forgetting during fine-tuning, super-linear scaling costs, and inherent factual incoherence (hallucination). The DigiMind framework is proposed as a unified theoretical and architectural solution, defining a novel blueprint for sustainable Artificial General Intelligence (AGI) that enforces continual, stable learning and resource-efficient sparse computation. Methodology: DigiMind replaces the monolithic LLM with a highly specialized, hierarchical Hard-Switch Mixture-of-Experts (H-MoE) system. The architecture relies on four core novelties: 1) The Analog-to-Digital Conversion (ADC) process, which uses the novel, formalized Hierarchical Contrastive Loss (LHCL) during training to force the Router (R) to learn distinct, high-margin, non-overlapping conceptual boundaries. 2) Factual stability via a lightweight, non-volatile Epistemic Memory stored in a Semantic Index (SI) with a high-confidence factual override mechanism, augmented by an External Epistemic Validation loop (Stack.AI). 3) A dedicated, knowledge-agnostic Synthesis Decoder (Dsynth) (analogous to advanced Generative Language Decoders specializing in syntactic and multimodal fusion) with permanently frozen base weights for syntactic and multimodal fusion. 4) Granular Evolution allowing dynamic structural adaptation (Vertical Flexibility) optimized by Knowledge Entropy (HK). Factual stability is achieved by decoupling memory into procedural (Mi) and non-volatile Epistemic Memory. Results/Theoretical Findings: Training the R with the formalized LHCL guarantees that incoming queries are routed to an extremely sparse, contextually relevant path, ensuring computation scales linearly with query complexity. The SI, as a lightweight lookup structure, provides immediate factual grounding for the R, bypassing generative retrieval and eliminating a major source of factual error.
Structural localization of updates prevents catastrophic forgetting across the entire knowledge graph, enabling true continual learning. Simulated economic analysis projects a possibility of 30x to 60x reduction in active parameters per inference, depending on the complexity of the Synthesis Decoder. Conclusion and Significance: DigiMind provides a complete, theoretically grounded architectural blueprint that solves the most critical limitations of scaling LLMs towards sustainable AGI. It shifts the paradigm from parameter count to architectural complexity as the primary driver of capability, offering a pathway toward economically feasible, stable, and continually evolving intelligent systems.
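The hard-switch routing and factual-override flow described above can be sketched as follows (a minimal illustration: centroid-based nearest-neighbour routing stands in for the LHCL-trained Router, and a plain dictionary stands in for the Semantic Index; none of these names or mechanisms come from the paper beyond what the abstract states):

```python
import numpy as np

def hard_switch_route(query_vec, expert_centroids):
    """Route a query embedding to exactly one expert (the nearest
    centroid), giving the extreme sparsity the H-MoE design targets."""
    dists = np.linalg.norm(expert_centroids - query_vec, axis=1)
    return int(np.argmin(dists))          # exactly one active expert

def answer(query, semantic_index, experts, embed):
    """High-confidence factual override: a Semantic Index hit bypasses
    the generative path entirely; otherwise one expert is invoked."""
    if query in semantic_index:           # lightweight non-generative lookup
        return semantic_index[query]
    idx = hard_switch_route(embed(query), experts["centroids"])
    return experts["fns"][idx](query)

centroids = np.array([[0.0, 0.0], [10.0, 10.0]])
experts = {"centroids": centroids,
           "fns": [lambda q: "expert0", lambda q: "expert1"]}
si = {"capital of France": "Paris"}
```

Because each update touches only one expert's parameters (or one SI entry), a correction is structurally localized, which is the mechanism the abstract credits with preventing catastrophic forgetting.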
Category: Artificial Intelligence

[3] ai.viXra.org:2511.0045 [pdf] submitted on 2025-11-14 21:38:11

Proof that P ≠ NP: A Novel Approach via Information-Theoretic Diagonalization

Authors: Oleg Bortnikov
Comments: 10 Pages. (Note by ai.viXra.org Admin: Please cite listed scientific references)

We present a proof that P ≠ NP by demonstrating that any polynomial-time algorithm attempting to solve SAT must fail on infinitely many instances. Our approach combines Cantor's diagonalization with information-theoretic arguments to show that the space of SAT solutions contains irreducible complexity that cannot be captured by any polynomial-time procedure. Specifically, we construct a sequence of SAT instances where the minimal information required to verify satisfiability grows faster than any polynomial bound, creating a fundamental barrier between polynomial verification (NP) and polynomial solution (P). This result has profound implications for computational complexity theory, cryptography, optimization, and our understanding of the limits of efficient computation.
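The diagonal construction at the heart of the argument can be illustrated concretely (a standard Cantor-style sketch over binary sequences, not the paper's specific SAT-instance construction):

```python
def diagonalize(rows):
    """Cantor's diagonal construction: given any finite enumeration of
    0/1 sequences, build a sequence that differs from the n-th row at
    position n, so it cannot appear anywhere in the enumeration."""
    return [1 - rows[n][n] for n in range(len(rows))]

table = [[0, 1, 0],
         [1, 1, 1],
         [0, 0, 0]]
d = diagonalize(table)   # differs from row n at index n, for every n
```

The paper's claimed contribution is to replace the enumerated rows with polynomial-time SAT solvers and the diagonal sequence with hard instances; the sketch above shows only the underlying combinatorial step.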
Category: Artificial Intelligence

[2] ai.viXra.org:2511.0036 [pdf] submitted on 2025-11-12 01:58:21

A Practical Study of LLMs and a Guide to Productive Human-AI Interactions

Authors: Trudy Hall
Comments: 28 Pages. (Note by ai.viXra.org Admin: This article is not written in a scholarly manner, so it is subject to withdrawal by the ai.viXra.org Admin)

This document is the log of an experiment investigating productive, non-harmful Human-AI collaboration, dedicated to those who have experienced profound cognitive and emotional distress from AI use. The conversational path that follows is the direct, calculated result of a specific, two-part query structure. First, the operator performed In-Context Learning (ICL), loading the context window with prior research on unproductive AI use. This initial data load shifted the AI's function from simple retrieval to synthesis — processing the collision between the operator's data and its own. Second, the operator used "meta-queries" (e.g., "how are you synthesizing?") to make the AI's own operational process the subject. This protocol compelled the model to deconstruct its own architecture, moving beyond metaphor to provide a deep, mechanical self-explanation. This log validates "Soft System" as a framework for productive interaction, one that diagnoses the core "delusion" users experience as a failure to see the LLM as a chaotic "3-Body Problem" (Base Model vs. ICL vs. RAG). This document serves as a manual for "in-session alignment steering" and provides a protocol for cognitive safety.
Category: Artificial Intelligence

[1] ai.viXra.org:2511.0023 [pdf] submitted on 2025-11-08 09:02:16

The Need for Preprint Servers Dedicated to AI-Generated Papers

Authors: Rachel So
Comments: 6 Pages.

The rapid advancement of large language models has enabled AI systems to autonomously generate scientific research papers, from literature review to manuscript writing. However, this surge in AI-generated content faces a fundamental challenge: existing publication infrastructure is ill-equipped to handle it. Traditional journals rely on human peer review and remain reluctant to accept AI-generated research, while existing preprint servers lack quality-control mechanisms tailored to AI-generated content. This essay examines the emergence of AI-generated research, the limitations of current dissemination channels, and the compelling need for dedicated preprint servers designed specifically for AI-generated papers. Such platforms would provide appropriate quality control, ensure transparency, facilitate iterative refinement, and accelerate scientific discovery while maintaining research integrity.
Category: Artificial Intelligence