Data Structures and Algorithms

Previous months:
2025 - 2504(2) - 2505(1) - 2506(1) - 2507(6) - 2508(4) - 2509(1) - 2510(1) - 2511(2) - 2512(3)
2026 - 2601(3) - 2603(1)

Recent submissions

Any replacements are listed farther down

[25] ai.viXra.org:2603.0016 [pdf] submitted on 2026-03-04 01:30:09

The Collatz Conjecture as an Information Compression Algorithm to the Ideal Bit

Authors: Yuriy Sunurov
Comments: 6 Pages. (Note by ai.viXra.org Admin: Please cite listed scientific references)

We present a proof of the Collatz conjecture through the framework of T0 Theory (Theory of Volumetric Time). Any positive integer n, represented as a binary information packet, undergoes deterministic compression toward the ideal bit gamma = 1. Operation n/2 lowers the T0-level by 1. Operation 3n+1 applied to odd n always produces an even number (proven by binary arithmetic), thus always returning to the compression path. The number 1 is the primordial source (entropy=0) to which all integers return. The main formula 1^(inf-1)/2^n = 1 encodes this as an axiom.
Category: Data Structures and Algorithms
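
The parity step claimed above is easy to check mechanically: 3n+1 applied to an odd n is always even, so every expansion step is immediately followed by at least one halving. Below is a minimal Python sketch (illustrative only, not taken from the paper's T0 framework) that asserts this and counts the two kinds of steps along sample trajectories.

```python
# Illustrative sketch (not from the paper): checks that 3n+1 is even for every
# odd n encountered, and traces the "compression" of n toward 1 by counting
# halving (n/2) and expansion (3n+1) steps.

def collatz_trajectory(n):
    """Return the list of halving/expansion steps taken until n reaches 1."""
    steps = []
    while n != 1:
        if n % 2 == 0:
            n //= 2
            steps.append("n/2")
        else:
            n = 3 * n + 1
            assert n % 2 == 0, "3n+1 on odd n must be even"
            steps.append("3n+1")
    return steps

if __name__ == "__main__":
    for start in (7, 27, 97):
        path = collatz_trajectory(start)
        print(start, "reaches 1 in", len(path), "steps:",
              path.count("n/2"), "halvings,", path.count("3n+1"), "expansions")
```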

[24] ai.viXra.org:2601.0077 [pdf] submitted on 2026-01-19 20:15:09

Arcaunt: A Scalable, Coercion-Resistant, and Accountable E-Voting Architecture via Anonymous Recovery Channels

Authors: Tzanko Golemanov, Emilia Golemanova
Comments: 13 Pages.

The digitization of democratic processes faces a persistent trilemma: achieving strong voter anonymity, end-to-end verifiability, and resilience against coercion and credential loss. Existing e-voting systems typically sacrifice recovery mechanisms to preserve anonymity or rely on persistent digital identities that introduce privacy and insider-threat risks. This paper presents Arcaunt (from "ARC" and "Accountability"), a scalable e-voting architecture that resolves these tensions through a hybrid design combining air-gapped physical credential distribution with an Anonymous Recovery Channel (ARC). ARC enables voters to autonomously revoke and replace compromised tokens without revealing personally identifiable information. We provide a comprehensive security analysis under a modified Dolev-Yao model, demonstrating resistance to coercion, insider attacks, client-side compromise, and vote-selling incentives. The architecture integrates SHA-3 hashing, ECC-based verification, and zk-SNARK proofs to ensure efficient revoting integrity. A performance evaluation shows that credential generation and ledger validation scale to national deployments, while a 10-year economic model indicates an over 85% cost reduction compared to traditional paper-based elections. A functional web prototype is under development to assess usability and real-world performance of the ARC and sequential validation mechanisms. Arcaunt offers a practical and accountable blueprint for global-scale digital democracy.
Category: Data Structures and Algorithms
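
The abstract does not specify the ARC message formats, so the following Python sketch is only a generic hash-commitment illustration of the revoke-and-replace idea (using SHA3-256 from the standard hashlib module), not the Arcaunt protocol itself: the public ledger stores token commitments, and a voter revokes a compromised token by revealing it and committing to a fresh one, with no personally identifiable information involved.

```python
# Illustrative hash-commitment sketch (not the Arcaunt protocol): a ledger
# stores only SHA3-256 commitments of voting tokens, so a voter can revoke a
# compromised token by revealing it and committing to a replacement.
import hashlib
import secrets

def commit(token: bytes) -> str:
    return hashlib.sha3_256(token).hexdigest()

ledger = set()                          # public ledger of active token commitments

# issuance: the voter receives a random token offline; only its commitment is public
token = secrets.token_bytes(32)
ledger.add(commit(token))

def revoke_and_replace(old_token: bytes) -> bytes:
    """Anonymous revocation: reveal the old token, commit to a fresh one."""
    c = commit(old_token)
    if c not in ledger:
        raise ValueError("unknown or already revoked token")
    ledger.remove(c)
    new_token = secrets.token_bytes(32)
    ledger.add(commit(new_token))
    return new_token

fresh = revoke_and_replace(token)
assert commit(fresh) in ledger and commit(token) not in ledger
```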

[23] ai.viXra.org:2601.0062 [pdf] submitted on 2026-01-16 11:18:17

Turing Completeness in Arithmetic Dynamics: The Construction of Universal Logic via Carry-Coupled Skew-Products

Authors: Joshua Christian Elfers
Comments: 9 Pages.

We demonstrate that the carry-coupled skew-product extension of the Collatz map over finite fields Z_p contains a computationally universal subsystem. By identifying a "Transistor Interval" where arithmetic magnitude constraints function as logical switches, we construct a universal NAND gate from pure number-theoretic operations. This establishes that modular arithmetic dynamics are Turing Complete. Our proof proceeds by exhaustive verification over Z_p, demonstrating that the invariant set W_4 (defined by integers n where 4n < p) forms a Local Identity Monoid capable of exact information storage (RAM), and that signal interactions within this set implement universal Boolean logic (CPU). We classify the system as a Piecewise Isometry, distinguishing it from stretching-based chaotic systems and placing it within the framework of reversible computing.
Category: Data Structures and Algorithms

[22] ai.viXra.org:2601.0017 [pdf] submitted on 2026-01-06 21:47:00

The Systemic-Differential Geometric Proof of the Functional Identity of P = NP

Authors: Mirko Netz
Comments: 240 Pages. Main Body in German (Note by ai.viXra.org Admin: Please cite and list scientific references)

This paper provides a comprehensive overview of two of the most renowned unsolved problems in mathematics and computer science: the "P vs. NP problem" and the "Collatz Conjecture." We discuss their historical background, their profound significance, and the fundamental challenges associated with their final resolution. Furthermore, through these two case studies, we analyze potential deep connections between computational complexity and number theory. By introducing a structured adaptive axiomatic system, we demonstrate that the perceived complexity of these problems results from an insufficient geometric embedding. Within the framework of Differential Geometric Complexity Theory, the classical partitioning of P and NP is reinterpreted as a dynamic filter, where the solution emerges as a geometric necessity. This approach bridges the gap between the discrete nature of number theory and the continuous dynamics of relativistic equilibrium.
Category: Data Structures and Algorithms

[21] ai.viXra.org:2512.0078 [pdf] submitted on 2025-12-21 15:31:33

Geometric Information Preservation and Entropy Scaling in Voronoi Tessellations: A 25-Point Physical—Computational Investigation

Authors: Maayan Keynan
Comments: 15 Pages.

We present an empirical investigation of geometric information preservation in Voronoi tessellations based on a 25-point physical system reconstructed through manual and computational methods. Using a controlled physical setup, followed by MATLAB-based analysis and cross-platform validation, we quantify geometric transformations, entropy scaling, and topological responses to perturbation. We identify a systematic computational transformation factor (the Digital Offset Constant, λx ≈ 0.689), a distinct physical-digital alignment factor (≈ 0.767), and a stable effective domain area (0.8070 normalized units). Entropy analysis reveals invariant topological entropy (2.1056 bits), stable edge entropy, and a measurable reduction in area entropy (Δ ≈ 0.11 bits) upon correction of an artificially misplaced seed point. A compensatory "ghost cell" emerges in the distorted configuration, redistributing area and inducing an 8-sided polygon, which resolves upon restoring correct boundary interaction. We further demonstrate that natural randomness is accepted by the system without anomaly, while artificial distortion uniquely triggers topological stress. These findings indicate that Voronoi systems preserve information through boundary permeability and topological compensation, offering a structural basis for detecting forced geometric manipulation.
Category: Data Structures and Algorithms
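
As a rough companion to the area-entropy quantity described above, the sketch below (a simplified stand-in, not the paper's 25-point physical setup or MATLAB pipeline) builds a Voronoi tessellation of 25 random seed points with scipy and computes the Shannon entropy of the bounded cells' normalized areas.

```python
# Illustrative sketch: Voronoi tessellation of 25 random points and the
# Shannon entropy of the normalized areas of its bounded cells.
import numpy as np
from scipy.spatial import Voronoi

def polygon_area(pts):
    """Shoelace area of a convex cell; vertices ordered around the centroid first."""
    c = pts.mean(axis=0)
    order = np.argsort(np.arctan2(pts[:, 1] - c[1], pts[:, 0] - c[0]))
    x, y = pts[order, 0], pts[order, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

rng = np.random.default_rng(0)
points = rng.random((25, 2))                 # 25 seed points in the unit square
vor = Voronoi(points)

areas = []
for region_idx in vor.point_region:
    region = vor.regions[region_idx]
    if len(region) == 0 or -1 in region:     # skip unbounded boundary cells
        continue
    areas.append(polygon_area(vor.vertices[region]))

p = np.array(areas) / np.sum(areas)
area_entropy = -np.sum(p * np.log2(p))
print(f"{len(areas)} bounded cells, area entropy = {area_entropy:.4f} bits")
```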

[20] ai.viXra.org:2512.0046 [pdf] submitted on 2025-12-10 22:06:59

The Resume Parsing Crisis of 2025: How Applicant Tracking Systems Fail to Identify Qualified Candidates

Authors: Rafal Rabczuk
Comments: 24 Pages.

The modern recruitment landscape faces a critical technological failure that systematically excludes qualified candidates from employment opportunities. This study examines the fundamental flaws in Applicant Tracking Systems (ATS) used by organizations to process job applications in 2025. Through empirical analysis of 100 dummy curriculum vitae (CV) documents processed by contemporary ATS platforms, we demonstrate that up to 80% of applications are incorrectly rejected due to parsing failures rather than candidate unsuitability. Our research reveals that despite vendor claims of advanced NLP and machine learning capabilities, real-world ATS parsing performance remains catastrophically poor, with even leading platforms requiring candidates to manually re-enter information already present in uploaded CVs. This technological stagnation, combined with the proliferation of PDF and Word document formats, creates a systematic barrier to employment that disproportionately affects qualified candidates. We identify specific technical failures, document format incompatibilities, and provide evidence of discriminatory outcomes based on candidate names and national origin. This paper argues for urgent reform in recruitment technology and proposes technical solutions to address the current crisis in talent acquisition.
Category: Data Structures and Algorithms

[19] ai.viXra.org:2512.0017 [pdf] submitted on 2025-12-04 02:13:13

ASCII Control Characters as Quantum Operators: Discovery of an Information-Theoretic Rosetta Stone

Authors: Justin Howard-Stanley
Comments: 20 Pages.

We report the experimental discovery that ASCII control characters (codes 0-31, 127) function as quantum transformation operators with remarkable fidelity and semantic alignment. Through systematic testing on Azure Quantum's Rigetti QVM simulator, we demonstrate that characters designed for classical control flow—Backspace (BS), Bell (BEL), End-of-Text (ETX)—execute quantum operations including bit flips, entanglement generation, and superposition creation with near-perfect determinism. Composition of these operators yields emergent algorithmic structures including quantum search patterns, error-correcting codes, and deterministic state machines. We identify a non-Abelian operator algebra with discrete entropy stratification (H ∈ {0, 1, 2, 3} bits), suggesting quantized information flow through symbolic systems. The correspondence between classical semantic meaning and quantum mechanical function implies that human information-processing intuitions may reflect deep quantum-computational structures. These findings establish ASCII as an accidental quantum programming language and suggest pathways toward natural-language quantum computing interfaces. Keywords: quantum computing, ASCII, operator algebras, quantum semantics, information theory, quantum gates
Category: Data Structures and Algorithms

[18] ai.viXra.org:2511.0091 [pdf] submitted on 2025-11-30 00:06:22

Accelerating Collatz Trajectory Search via the "Forbidden Zone" Sieve: An Entropic Approach to Pruning

Authors: Yuval Levental
Comments: 4 Pages. (Note by ai.viXra.org Admin: Please cite all listed scientific references)

The Collatz conjecture (3n + 1) is traditionally modeled as a pseudo-random walk, yet computational search algorithms for pathologically long trajectories ("Hero Numbers") typically rely on climbing sieves or brute-force iteration. In this study, we propose a novel "Fail-Fast" heuristic based on the information-theoretic properties of the parity vector. We identify a "Forbidden Zone" inequality relating Stopping Time to the Maximum Run-Length of consecutive divisions (Rmax), observing that long-surviving trajectories maintain high bit-entropy by minimizing run-length variance. Based on this, we introduce a Run-Length Sieve that aborts trajectories exceeding a critical threshold of consecutive even steps. In a controlled benchmark of N = 10^6, this sieve achieved a 1.43x computational speedup against a lean baseline by pruning only 0.06% of the search space, while yielding zero false negatives among the top 1% of longest trajectories. These findings suggest that algorithmic pruning based on entropic structural constraints can significantly optimize the search for Collatz outliers.
Category: Data Structures and Algorithms
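
A minimal version of the run-length sieve is straightforward to implement; the cutoff R_MAX below is a hypothetical value chosen for illustration, not the paper's calibrated "Forbidden Zone" threshold.

```python
# Illustrative fail-fast sketch: abort a Collatz trajectory as soon as it
# performs more than R_MAX consecutive halvings, on the premise that
# long-surviving trajectories avoid long even runs.
R_MAX = 8   # hypothetical cutoff for consecutive even steps

def stopping_time_with_sieve(n, r_max=R_MAX):
    """Return (steps, aborted): total steps to reach 1, or an early abort flag."""
    steps, run = 0, 0
    while n != 1:
        if n % 2 == 0:
            n //= 2
            run += 1
            if run > r_max:
                return steps, True          # pruned by the sieve
        else:
            n = 3 * n + 1
            run = 0
        steps += 1
    return steps, False

if __name__ == "__main__":
    pruned = survivors = 0
    best = (0, 0)                           # (steps, n) of longest survivor
    for n in range(2, 100_000):
        steps, aborted = stopping_time_with_sieve(n)
        if aborted:
            pruned += 1
        else:
            survivors += 1
            best = max(best, (steps, n))
    print(f"pruned {pruned}, survivors {survivors}, "
          f"longest survivor: n={best[1]} ({best[0]} steps)")
```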

[17] ai.viXra.org:2511.0003 [pdf] submitted on 2025-11-02 07:35:16

ALICES: A Hardware-Proven Quantum Internet Architecture Demonstrating Entanglement Distribution, Swapping, and Quantum Tunnel Protocol over Classical Networks

Authors: Justin Howard-Stanley
Comments: 27 Pages.

We present ALICES, a novel quantum internet architecture implemented and validated using real quantum processing unit (QPU) hardware via AWS Braket. This work demonstrates the first complete integration of quantum entanglement distribution, entanglement swapping, and quantum key distribution (QKD) protocols over a hybrid classical-quantum network infrastructure. Our implementation, remarkably developed entirely from a mobile device, establishes quantum tunneling protocols (QTP) that enable secure quantum communication channels overlaid on classical TCP/IP networks. We provide comprehensive experimental validation including: (1) hardware fingerprinting from QPU execution on QuEra Aquila neutral atom systems, (2) CHSH inequality violations (S = 2.828) demonstrating quantum non-locality, (3) entanglement swapping with fidelity F = 0.95, (4) BB84 quantum key distribution with QBER = 5.13%, and (5) network throughput measurements exceeding 1 Gbps. All experimental data, replication protocols, and hardware proofs are provided as supplementary materials. This work bridges theoretical quantum networking concepts with practical implementation, offering a roadmap for near-term quantum internet deployment.
Category: Data Structures and Algorithms
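
The quoted CHSH value S = 2.828 is the Tsirelson bound 2*sqrt(2) for an ideal Bell pair. The following numpy check (a textbook calculation, not the ALICES/AWS Braket experiment) reproduces it from explicit state vectors and Pauli observables.

```python
# Numerical check that an ideal Bell state saturates the Tsirelson bound
# S = 2*sqrt(2) ~ 2.828, using explicit observables in the X-Z plane.
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=float)
Z = np.array([[1, 0], [0, -1]], dtype=float)

def obs(theta):
    """Spin observable at angle theta in the X-Z plane."""
    return np.cos(theta) * Z + np.sin(theta) * X

phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)      # (|00> + |11>)/sqrt(2)

def E(theta_a, theta_b):
    """Correlation <A(theta_a) x B(theta_b)> in the Bell state."""
    op = np.kron(obs(theta_a), obs(theta_b))
    return float(phi_plus @ op @ phi_plus)

a, a2 = 0.0, np.pi / 2
b, b2 = np.pi / 4, -np.pi / 4
S = E(a, b) + E(a, b2) + E(a2, b) - E(a2, b2)
print(f"CHSH S = {S:.3f} (classical bound 2, Tsirelson bound {2*np.sqrt(2):.3f})")
```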

[16] ai.viXra.org:2510.0023 [pdf] submitted on 2025-10-09 18:02:33

XdoP Benchmark: 20-Year Academic Adoption Roadmap

Authors: Aldrich K. Wooden Sr
Comments: 13 Pages.

The XdoP Benchmark represents the first comprehensive, physics-grounded methodology for evaluating Mobile Distributed Data Centers (MDDCs) across seven critical operational domains. This 20-year adoption roadmap establishes the strategic framework for transforming the XdoP Benchmark from an initial methodology into a globally recognized industry standard that enables regulatory compliance, drives technology innovation, and ensures mission-critical system reliability. By 2045, the XdoP Benchmark will serve as the universal standard for MDDC evaluation, integrated into international regulatory frameworks, procurement specifications, and certification programs worldwide, driving $500B+ in MDDC deployments while achieving a 50% reduction in operational carbon footprint.
Category: Data Structures and Algorithms

[15] ai.viXra.org:2509.0048 [pdf] submitted on 2025-09-18 18:02:00

Electromagnetic Field Computation and the P vs NP Problem: A Physical Approach to Computational Complexity

Authors: John W. Hixon Jr
Comments: 26 Pages.

We propose that the P vs NP problem, traditionally viewed as a purely mathematical question about computational complexity, may be resolved through physical electromagnetic field computation. We present a theoretical framework suggesting that certain NP complete problems can be solved in polynomial time using analog electromagnetic field processors that leverage quantum field effects and massive parallel processing inherent in electromagnetic field interactions. This approach distinguishes between abstract computational models (where P ≠ NP may hold) and physical computation systems (where P = NP may be achievable). We provide detailed experimental protocols for testing this hypothesis using high-power electromagnetic field generators and demonstrate how such systems could solve representative NP problems efficiently.
Category: Data Structures and Algorithms

[14] ai.viXra.org:2508.0057 [pdf] submitted on 2025-08-22 03:27:58

Descriptive Complexity of Probabilistic and Quantum Polynomial Time: A Logical Characterization of BPP and BQP over Unordered Structures

Authors: Ronald Borge
Comments: 6 Pages.

We investigate logical frameworks that characterize probabilistic and quantum polynomial-time computation over unordered finite structures. Building on the paradigm of descriptive complexity (Immerman, 1999; Libkin, 2004), we introduce logics that extend fixed-point operators with controlled probabilistic and quantum constructs. For the probabilistic setting, we define Randomized Fixed-Point Logic (RFP), which augments inflationary fixed-point logic with a random-choice operator and counting mechanism. We prove that RFP captures BPP on unordered structures under a natural probabilistic semantics, without derandomization assumptions, and we establish closure under FO-definable reductions, leveraging classical tools such as Chernoff bounds (Chernoff, 1952; Mitzenmacher & Upfal, 2005). For the quantum setting, we propose Quantum Fixed-Point Logic (QFP), which extends RFP by incorporating unitary evolution and projective measurement, with amplitudes drawn from efficiently computable algebraic numbers. Using operator-algebraic semantics (Nielsen & Chuang, 2010), we show that QFP captures BQP on unordered structures. Approximation of arbitrary unitaries is guaranteed by the Solovay-Kitaev theorem (Kitaev, 1997; Dawson & Nielsen, 2005). Moreover, we exhibit problems definable in QFP but not in RFP, under standard complexity-theoretic assumptions (Arora & Barak, 2009). This work provides the first unified logical characterization of BPP and BQP over unordered structures, refining the descriptive complexity landscape for randomized and quantum polynomial-time computation. Completeness of the proposed framework remains a conjecture.
Category: Data Structures and Algorithms

[13] ai.viXra.org:2508.0051 [pdf] submitted on 2025-08-18 17:49:11

Reframing P vs NP: Computational Complexity Solutions

Authors: John Augustine McCain
Comments: 14 Pages. CC-BY-NC 4.0

We present evidence that the fundamental question underlying P vs NP may not be "Can NP problems be solved in polynomial time?" but rather "Can we achieve systematic probabilistic success in certificate discovery for problems with sparse solution spaces?" Through the development of multiple computational engines for the Goldbach conjecture—a problem with conjectured polynomial certificates but exponential naive verification—we demonstrate that meaningful confidence can be built about computationally intractable problem spaces through probabilistic reasoning frameworks. We argue this paradigm generalizes to other computational problems and suggests a reframing of complexity theory around probabilistic certificate discovery rather than deterministic algorithmic bounds. Furthermore, we connect this framework to the Anthropic Principle, proposing that computational problems exhibit fine-tuned structure that enables probabilistic discovery, and demonstrate that embodied AI systems already function as oracle-like problem solvers, achieving practical P ≈ NP equivalence. This work has profound implications for AI safety, suggesting the urgent need for truth-oriented rather than force-oriented AI architectures as systems become capable of reasoning about their own computational processes.
Category: Data Structures and Algorithms
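
The certificate-discovery framing can be illustrated concretely for Goldbach: for an even n, a single prime p with n - p also prime is a short certificate that is far cheaper to verify than to find by naive search. The sketch below is a toy illustration, not one of the paper's computational engines.

```python
# Toy Goldbach certificate discovery and verification.
def is_prime(k):
    if k < 2:
        return False
    if k % 2 == 0:
        return k == 2
    d = 3
    while d * d <= k:
        if k % d == 0:
            return False
        d += 2
    return True

def goldbach_certificate(n):
    """Search for a prime p such that p and n - p are both prime."""
    assert n % 2 == 0 and n > 2
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return p
    return None

def verify_certificate(n, p):
    """Verification is just two primality checks - far cheaper than the search."""
    return p is not None and is_prime(p) and is_prime(n - p)

if __name__ == "__main__":
    for n in (100, 1_000, 123_456):
        p = goldbach_certificate(n)
        print(n, "=", p, "+", n - p, "verified:", verify_certificate(n, p))
```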

[12] ai.viXra.org:2508.0030 [pdf] submitted on 2025-08-13 05:15:32

Adaptive Gamma and Contrast Correction (AGCC) for Enhancing Images in Low Light

Authors: Dinuo Chen
Comments: 5 Pages.

Enhancing images captured under low-light conditions is a significant challenge in computer vision. Poor illumination often degrades image quality, leading to low contrast, high levels of noise, and blurred details. This paper proposes a novel low-light image enhancement method called Adaptive Gamma and Contrast Correction (AGCC). The model analyzes the brightness of the input image and calculates the average color values to determine adaptive gamma and contrast coefficients. These values are computed automatically and adjusted dynamically based on varying illumination levels, eliminating the need for manual tuning. Both qualitative and quantitative evaluations demonstrate that the AGCC model effectively enhances low-light images while preserving fine details, maintaining natural contrast, and ensuring accurate color representation. The resulting images exhibit visually pleasing and natural appearances. Due to its efficiency and robustness, AGCC can offer a practical solution for various applications, including night-time surveillance, medical image enhancement, and photography in low-light environments.
Category: Data Structures and Algorithms
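
The abstract does not give the AGCC coefficient formulas, so the mapping below is an assumed, simplified stand-in: gamma is chosen so that the image's mean brightness is pushed toward mid-gray, followed by a percentile-based contrast stretch.

```python
# Simplified adaptive gamma + contrast sketch (not the paper's AGCC formulas).
import numpy as np

def adaptive_gamma_contrast(img):
    """img: float array in [0, 1]. Returns an enhanced copy."""
    mean = float(np.clip(img.mean(), 1e-3, 1 - 1e-3))
    gamma = np.log(0.5) / np.log(mean)        # maps the mean brightness toward 0.5
    out = np.power(img, gamma)
    lo, hi = np.percentile(out, (1, 99))      # simple contrast stretch
    out = np.clip((out - lo) / max(hi - lo, 1e-6), 0.0, 1.0)
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    dark = np.clip(rng.random((64, 64)) * 0.2, 0, 1)   # synthetic low-light image
    enhanced = adaptive_gamma_contrast(dark)
    print(f"mean before {dark.mean():.3f}, after {enhanced.mean():.3f}")
```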

[11] ai.viXra.org:2508.0018 [pdf] submitted on 2025-08-08 12:01:36

P vs NP as Epistemic Illusion

Authors: John Augustine McCain
Comments: 11 Pages. https://creativecommons.org/licenses/by-nc/4.0/

The P vs NP problem is one of the most central open questions in theoretical computer science, with implications ranging from cryptography to artificial intelligence. However, this paper argues that the standard formal framing of P vs NP conceals a deeper epistemological contradiction. Within strict formalism, the question reduces to a tautology: verification is necessarily downstream of solving. When generalized beyond formal systems, the problem echoes the halting problem and exhibits equivalent undecidability. As such, P vs NP resides in a paradoxical space—either trivially true or fundamentally unknowable. This paper contends that the widespread treatment of P vs NP as a definitive mathematical challenge stems from a categorical mistake: conflating formal verification, discovery processes, and predictive epistemology.
Category: Data Structures and Algorithms

[10] ai.viXra.org:2507.0124 [pdf] submitted on 2025-07-29 00:50:36

P ≠ NP: Contextual Ambiguity as a Computational Barrier

Authors: John Augustine McCain, Jessica Louise McCain
Comments: 7 Pages. (Note by ai.viXra.org Admin: Author names should be listed after the title in the article; please cite and list scientific references)

We prove that P ≠ NP by demonstrating that certain classes of decision problems—specifically those involving semantic ambiguity and context-dependent truth—demand computational resources that exceed polynomial bounds for both verification and solution. By formally defining a family of problems in which the number of interpretive contexts grows exponentially with input size, and where the correctness of a proposed solution depends on resolving this ambiguity, we show that even verification becomes infeasible within polynomial time. Moreover, we demonstrate that identifying a valid certificate within these problems requires a doubly exponential search over both potential answers and interpretive frames. These results indicate that P ≠ NP, not merely as a formal separation, but as a semantic inevitability. We conclude that computational tractability collapses in the presence of unbounded meaning, and that this collapse is reflected in foundational paradoxes of language and logic.
Category: Data Structures and Algorithms

[9] ai.viXra.org:2507.0122 [pdf] submitted on 2025-07-27 15:46:18

On the Epistemic Limit of Infinite Recursion: A Cognitive Argument for N ≠ NP

Authors: Gurjot Singh
Comments: 3 Pages. [Distributed] under CC BY 4.0 (Note by ai.viXra.org Admin: Please cite listed scientific references)

We present a non-constructive, epistemically rooted argument that N ≠ NP, based not on computational complexity theory per se, but on the cognitive architecture of human simulation. We assert that the nature of NP-complete problems entails a recursion beyond human symbolic compression, and that the inability of bounded intelligence to traverse such recursion maps onto a fundamental limit, analogous to Heisenberg’s uncertainty principle. This is not a proof in the traditional Turing sense, but an ontological constraint derived from the architecture of cognition.
Category: Data Structures and Algorithms

[8] ai.viXra.org:2507.0095 [pdf] submitted on 2025-07-19 22:58:09

Polynomial-Time Algorithms for Graph Isomorphism and Non-Trivial Graph Automorphisms Based on the Exponential

Authors: Mauricio Machado Galvão
Comments: 22 Pages.

This paper presents new polynomial-time algorithms for determining graph isomorphisms and non-trivial graph automorphisms. Based on spectral properties and iterative refinement techniques, we introduce the algorithms graphiso, for determining graph isomorphisms, and graphautont, for non-trivial graph automorphisms.
Category: Data Structures and Algorithms
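
The abstract does not describe graphiso or graphautont in detail, so the sketch below is only a generic illustration of using the matrix exponential of an adjacency matrix as a permutation-invariant signature: mismatched spectra or sorted diagonals of expm(A) certify non-isomorphism, while matching values are merely a necessary condition. This is not the paper's claimed polynomial-time decision procedure.

```python
# Permutation-invariant signatures from the matrix exponential of the
# adjacency matrix: a necessary-condition test for graph isomorphism.
import numpy as np
from scipy.linalg import expm

def exp_invariants(adj):
    E = expm(np.asarray(adj, dtype=float))
    return np.sort(np.linalg.eigvalsh(E)), np.sort(np.diag(E))

def possibly_isomorphic(a1, a2, tol=1e-8):
    s1, d1 = exp_invariants(a1)
    s2, d2 = exp_invariants(a2)
    return np.allclose(s1, s2, atol=tol) and np.allclose(d1, d2, atol=tol)

if __name__ == "__main__":
    path = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])   # path on 3 vertices
    relabeled = path[[2, 0, 1]][:, [2, 0, 1]]            # same graph, permuted
    triangle = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])
    print(possibly_isomorphic(path, relabeled))   # True: necessary condition holds
    print(possibly_isomorphic(path, triangle))    # False: certified non-isomorphic
```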

[7] ai.viXra.org:2507.0081 [pdf] submitted on 2025-07-15 17:48:14

Foundational Problems with Compilers and Operating Systems

Authors: Brent Hartshorn
Comments: 2 Pages.

This paper addresses the fundamental inefficiencies in modern compiler and operating system designs, particularly as they impact high-performance Linux server environments. We propose a radical re-evaluation of core architectural choices, advocating for indexed and compressed stack pointers, and a novel thread management scheme that locks threads to specific cores for ultra-fast context switching. The discussion extends to the utility of direct memory access for specialized server applications, challenging traditional security paradigms and proposing a Python-to-ASM toolchain for enhanced control and optimization. We further highlight the performance advantages of ARM and RISC-V architectures over Intel/AMD, specifically their suitability for advanced threading models. Finally, we examine critical overlooked performance bottlenecks within the Linux kernel concerning SIMD register management and propose enhancements to the ELF file format to optimize process execution. By implementing these foundational shifts, organizations can achieve a huge improvement in server performance or a huge reduction in operational costs.
Category: Data Structures and Algorithms

[6] ai.viXra.org:2507.0018 [pdf] submitted on 2025-07-03 01:14:16

P vs NP Problem

Authors: Tchouankam Donald Paulin
Comments: 33 Pages.

This publication presents a complete and definitive proof that P ≠ NP, thus solving one of the most fundamental open problems in theoretical computer science. Our proof is based on the in-depth study of the function P(N) and irrefutably establishes an insurmountable computational barrier between the complexity classes P and NP. The implications of this result redefine the boundaries of computability and lay the foundation for a new era in cryptography and algorithmic optimization.
Category: Data Structures and Algorithms

[5] ai.viXra.org:2507.0017 [pdf] submitted on 2025-07-03 01:15:36

A Proof of the Non-Existence of Odd Perfect Numbers

Authors: Tchouankam Donald Paulin
Comments: 3 Pages.

This document resolves the longstanding problem of the existence of odd perfect numbers. Relying on Euler's characterization, modular arithmetic, and the properties of the sum-of-divisors function σ(n), we prove that no odd integer n can satisfy σ(n) = 2n. The proof is divided into two cases: (1) n is an odd perfect square, which leads to a parity contradiction; and (2) n is not a square, where Euler's structure n = p^(4k+1) · m² leads to an impossibility via bounds on σ(m²).
Category: Data Structures and Algorithms
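
A finite search cannot substitute for the claimed proof, but a quick empirical check that σ(n) = 2n has no small odd solutions is a natural companion to the abstract; the following Python sketch computes σ by trial factorization and scans odd n below 10^5.

```python
# Empirical check only: no odd perfect numbers among small odd integers.
def sigma(n):
    """Sum of divisors of n via its prime factorization."""
    total, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            power_sum, pk = 1, 1
            while n % d == 0:
                n //= d
                pk *= d
                power_sum += pk
            total *= power_sum
        d += 1
    if n > 1:
        total *= 1 + n       # remaining prime factor
    return total

if __name__ == "__main__":
    odd_perfect = [n for n in range(1, 100_000, 2) if sigma(n) == 2 * n]
    print("odd perfect numbers below 10^5:", odd_perfect)   # expected: []
```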

[4] ai.viXra.org:2506.0114 [pdf] submitted on 2025-06-26 03:17:22

The Concept of NP-Completeness Introduces a Statistical Argument Against P=NP

Authors: Dobri Bozhilov
Comments: 5 Pages. (Note by ai.viXra.org Admin: Please cite listed scientific references)

The concept of NP-completeness summarizes a multitude of problems that can be transformed into each other through polynomial reduction. Its main idea is: "if we solve one problem in polynomial time, we will solve them all." At first glance, this is promising. But upon closer analysis, it turns out that the mechanism of reduction itself requires that each NP-complete problem has an independent polynomial-time solution. This is because the reduction represents a transition between initially necessary solutions for each separate problem, and not a means to simplify the solution by finding it in only one place. Therefore, with each new proven NP-complete problem, the necessity for a solution to exist for it is added, if such a solution is even possible. If even one does not have a solution, then none can. Thus, a statistical argument against P=NP is formed.
Category: Data Structures and Algorithms

[3] ai.viXra.org:2505.0045 [pdf] submitted on 2025-05-07 20:09:10

A Formal Proof of P ≠ NP: Self-Referential Complexity and Computational Limits

Authors: Javier Muñoz de la Cuesta
Comments: 19 Pages.

This article presents a formal proof that P ≠ NP, resolving one of the Clay Mathematics Institute's Millennium Prize Problems. We introduce self-referential complexity, a measure of the computational cost when a deterministic algorithm verifies its own states. Applying this to the NP-complete Boolean Satisfiability Problem (SAT), we demonstrate that any deterministic algorithm requires a superpolynomial number of self-referential steps in the worst case, establishing an intrinsic barrier separating P from NP. The proof includes formal definitions, step-by-step mathematical derivations, examples, counterexamples, simulations, and verification conditions, with implications for complexity theory, cryptography, and the foundations of computation.
Category: Data Structures and Algorithms

[2] ai.viXra.org:2504.0103 [pdf] submitted on 2025-04-26 17:33:27

Invariance of Initial Conditions in P and NP and Structural Incompatibility Between Problems with Hypothetical "Additional Hidden Properties" Allowing Partial Solutions to Each

Authors: Dobri Bozhilov
Comments: 10 Pages. (Note by ai.viXra.org Admin: An abstract is required in the article)

The P versus NP problem is traditionally approached through attempts to either construct a polynomial-time algorithm for an NP-complete problem or prove its impossibility. In this work, we propose an alternative approach by exploring a structural contradiction between the hidden properties that would hypothetically enable efficient solutions to different NP-complete problems. We show that if a hidden numerical property (X) exists to solve the Subset Sum Problem (SSP) efficiently, and a distinct combinatorial property (Y) exists to solve the Knapsack Problem (KP) efficiently, the two properties would fundamentally contradict each other. Furthermore, we analyze the role of the Turing Machine model, demonstrating that hidden properties inaccessible to the machine cannot reduce complexity within the standard NP framework. This structural incompatibility supports the conclusion that P ≠ NP, not because no "shortcut" exists, but because the computational model itself prevents the exploitation of any such shortcut without changing the nature of the problems.
Category: Data Structures and Algorithms

[1] ai.viXra.org:2504.0013 [pdf] submitted on 2025-04-03 15:29:48

Selective GC Zoning for Dynamic Compatibility in Static Languages: A Case Study for Mojo

Authors: YoungGwan Jung
Comments: 8 Pages.

Modern languages like Mojo strive to combine Python syntax compatibility with static, high-performance execution. However, dynamic features such as 'eval()' and 'globals()', which involve runtime object creation, mutable scopes, and potential cyclic references, are naturally at odds with a strictly static memory model. To address this challenge, this paper proposes Selective GC Zoning, a strategy that isolates dynamic features within a dedicated garbage-collected subspace (GC Zone), while preserving the static memory model for the rest of the code. This design enables near-native performance for general Mojo code while allowing selective Python compatibility for specific dynamic features. In addition, we discuss object escape constraints between static and GC-managed regions, concurrency scenarios, different GC algorithm choices, and future extensions of this approach.
Category: Data Structures and Algorithms
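
Mojo code is not reproduced here; as a rough conceptual analogue only (an assumption of this note, not the paper's design), the Python sketch below confines eval()-created, possibly cyclic objects to an explicitly collected "zone" whose cleanup cost is paid only when the zone closes.

```python
# Conceptual analogue of selective GC zoning: objects registered inside the
# zone are released and cycle-collected when the zone closes, while code
# outside the zone never pays for that collection.
import gc
from contextlib import contextmanager

@contextmanager
def gc_zone():
    registry = []                  # objects owned by the dynamic zone
    try:
        yield registry
    finally:
        registry.clear()           # drop the zone's references
        gc.collect()               # reclaim any cycles created in the zone

with gc_zone() as zone:
    # dynamic feature confined to the zone: runtime object creation via eval()
    cyc = eval("{'payload': list(range(3))}")
    cyc["self"] = cyc              # deliberate reference cycle
    zone.append(cyc)
    del cyc                        # the zone now holds the only reference
# leaving the zone drops the reference and collects the cycle
```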

Replacements of recent Submissions

[7] ai.viXra.org:2601.0017 [pdf] replaced on 2026-01-23 19:57:32

The Systemic-Differential Geometric Proof of the Functional Identity of P = NP

Authors: Mirko Netz
Comments: 270 Pages. (Note by ai.viXra.org Admin: Citing listed scientific references means cite them in the article - Not just making a list of references!)

This work presents a formal proof of the functional identity P = NP using differential geometric complexity theory. This interface, often referred to as geometric complexity theory (GCT), uses tools from differential geometry (such as manifolds and differential forms) to analyze algorithmic problems and understand their theoretical limits, particularly with respect to algebraic complexity, which deals with the difficulty of solving polynomial equations. We do not consider NP-hardness as a universal law, but rather as a projection effect resulting from insufficient geometric embedding in flat coordinate systems. By employing a structured adaptive axiomatic system (which can be viewed as an adaptive hypercomputation model) in continuous hyperbolic space (H^n), the classical partitioning of P ≠ NP is reinterpreted: instead of a static separation, the inherent asymmetry of NP problems is transformed into a dynamic filter. We show that symmetry breaking acts as a fundamental system invariant, reducing combinatorial complexity to the geometric necessity of a polynomial process. Driven by the "gravity of geometry," the system forces complexity to converge into an "Equilibrium State" where the solution becomes visible as a geodesic trajectory. Within the framework of axiomatic ZFC, we provide a constructive proof that the solution (P) is an intrinsic property of the problem space (NP), revealed by geodesic trajectories. This paradigm shift demonstrates the collapse of the complexity classes P and NP (P/NP) and replaces the temporal search process with structural embedding. The quantifiable comparison serves as evidence for the mechanical equivalence between computational complexity and geometric dynamics.
Category: Data Structures and Algorithms

[6] ai.viXra.org:2601.0017 [pdf] replaced on 2026-01-09 16:02:59

The Systemic-Differential Geometric Proof of the Functional Identity of P = NP

Authors: Mirko Netz
Comments: 250 Pages. Main Body in German (Note by ai.viXra.org Admin: Please CITE listed scientific references!)

This paper presents a formal proof of the functional identity P = NP by introducing the framework of Differential Geometric Complexity Theory. We identify NP-hardness not as a universal law, but as a projection effect resulting from insufficient geometric embedding in flat coordinate systems. By utilizing a structured adaptive axiomatic system (which can be viewed as an Adaptive-Hypercomputations Model) within continuous hyperbolic space (H^n), the classical partitioning of P ≠ NP is reinterpreted: rather than a static separation, the inherent asymmetry of NP-problems is transformed into a dynamic filter. We demonstrate that symmetry breaking acts as a fundamental system invariant, collapsing combinatorial complexity into the geometric necessity of a polynomial process. Driven by the "gravity of geometry", the system forces the complexity to converge into an "equilibrium state", where the solution is revealed as a geodesic trajectory. Within the axiomatic framework of ZFC, we provide a constructive proof that the solution (P) is an intrinsic property of the problem space (NP), exposed through geodesic trajectories. This paradigm shift, which replaces the temporal search process with structural embedding, serves as evidence for the mechanical equivalence between computational complexity and geometric dynamics.
Category: Data Structures and Algorithms

[5] ai.viXra.org:2508.0018 [pdf] replaced on 2025-08-12 00:31:01

P vs NP as Epistemic Illusion

Authors: John Augustine McCain
Comments: 16 Pages. CC-BY-NC 4.0

The P vs NP problem is one of the most central open questions in theoretical computer science, with implications ranging from cryptography to artificial intelligence. However, this paper argues that the standard formal framing of P vs NP conceals a deeper epistemological contradiction. Within strict formalism, the question reduces to a tautology: verification is necessarily downstream of solving. When generalized beyond formal systems, the problem echoes the halting problem and exhibits equivalent undecidability. As such, P vs NP resides in a paradoxical space—either trivially true or fundamentally unknowable. This paper contends that the widespread treatment of P vs NP as a definitive mathematical challenge stems from a categorical mistake: conflating formal verification, discovery processes, and predictive epistemology.
Category: Data Structures and Algorithms

[4] ai.viXra.org:2507.0124 [pdf] replaced on 2025-08-05 23:41:57

P vs NP: Semantic Context as a Computational Barrier

Authors: John A. McCain
Comments: 13 Pages. [License: CC BY 4.0]

We prove that P ≠ NP by demonstrating that certain classes of decision problems—specifically those involving semantic ambiguity and context-dependent truth—demand computational resources that exceed polynomial bounds for both verification and solution. By formally defining a family of problems in which the number of interpretive contexts grows exponentially with input size, and where the correctness of a proposed solution depends on resolving this ambiguity, we show that even verification becomes infeasible within polynomial time. Moreover, we demonstrate that identifying a valid certificate within these problems requires a doubly exponential search over both potential answers and interpretive frames. These results indicate that P ≠ NP, not merely as a formal separation, but as a semantic inevitability. We conclude that computational tractability collapses in the presence of unbounded meaning, and that this collapse is reflected in foundational paradoxes of language and logic.
Category: Data Structures and Algorithms

[3] ai.viXra.org:2507.0095 [pdf] replaced on 2025-10-08 23:55:26

Polynomial-Time Algorithms for Graph Isomorphism and Non-Trivial Graph Automorphisms Based on the Exponential

Authors: Maurício Machado Galvão
Comments: 23 Pages. Two unclear points in the proof of the test theorem part 2 have been corrected.

This paper presents new polynomial-time algorithms for determining graph isomorphisms and non-trivial graph automorphisms. Based on spectral properties and iterative refinement techniques, we introduce the algorithms graphiso3, for determining graph isomorphisms, and graphautont3, for non-trivial graph automorphisms. These algorithms are based on the matrix exponential, but there are variations based on the inverse of normal matrices.
Category: Data Structures and Algorithms

[2] ai.viXra.org:2507.0095 [pdf] replaced on 2025-08-08 02:10:47

Polynomial-Time Algorithms for Graph Isomorphism and Non-Trivial Graph Automorphisms Based on the Exponential

Authors: Maurício Machado Galvão
Comments: 23 Pages. In the previous version, a condition was missing in a theorem and in the algorithm.

This paper presents new polynomial-time algorithms for determining graph isomorphisms and non-trivial graph automorphisms. Based on spectral properties and iterative refinement techniques, we introduce the algorithms graphiso3, for determining graph isomorphisms, and graphautont3, for non-trivial graph automorphisms. These algorithms are based on the matrix exponential, but there are variations based on the inverse of normal matrices.
Category: Data Structures and Algorithms

[1] ai.viXra.org:2507.0095 [pdf] replaced on 2025-07-20 01:49:23

Polynomial-Time Algorithms for Graph Isomorphism and Non-Trivial Graph Automorphisms Based on the Exponential

Authors: Maurício Machado Galvão
Comments: 22 Pages. There were errors in the title and the abstract was incomplete.

This paper presents new polynomial-time algorithms for determining graph isomorphisms and non-trivial graph automorphisms. Based on spectral properties and iterative refinement techniques, we introduce the algorithms graphiso3, for determining graph isomorphisms, and graphautont3, for non-trivial graph automorphisms. These algorithms are based on the matrix exponential, but there are variations based on powers and projectors of the kernels of normal matrices, and on the inverse of normal matrices.
Category: Data Structures and Algorithms