Artificial Intelligence

2509 Submissions

[7] ai.viXra.org:2509.0072 [pdf] submitted on 2025-09-29 18:55:38

A Comprehensive Study on AI Operations on Quantum Computers

Authors: Futoshi Hamanoue
Comments: 5 Pages. Patent application filed.

We revisit Quantum-Inspired Attention (QI-Attn) under a fully reproducible CUDA/PyTorch stack and report token-level latency distributions on an RTX 3080. With TinyLlama-1.1B, QI-Attn improves throughput by +45% (tokens/s) and reduces per-token p95 by ≈43% at identical VRAM, while Phi-3-mini shows modest throughput gains (+7-11%) with mixed tail latency depending on (k, p, r, α, τ). These results refine prior claims ("up to 1.2×") by providing distribution-level evidence and cross-model behavior. For public reproducibility, we release the measurement procedures, CDF and histogram plots (legible in black and white), the measurement scripts (burn-in = 5), and the raw CSV logs, so that third parties can replicate under identical conditions.
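The measurement procedure described above (per-token timing, a burn-in of 5 discarded iterations, and distribution-level statistics such as p95) can be sketched as follows. This is a minimal illustration, not the paper's released harness; `generate_token` is a hypothetical stand-in for the model's decode step.

```python
import time
import statistics

def token_latency_stats(generate_token, n_tokens=200, burn_in=5):
    """Measure per-token latency and summarize its distribution.

    generate_token: zero-arg callable producing one token (a stand-in
    for a model decode step; not the paper's actual harness).
    The first `burn_in` measurements are discarded as warm-up,
    matching the burn-in = 5 used in the released scripts.
    """
    latencies = []
    for _ in range(n_tokens):
        t0 = time.perf_counter()
        generate_token()
        latencies.append(time.perf_counter() - t0)
    latencies = latencies[burn_in:]  # drop warm-up iterations
    latencies.sort()
    p95 = latencies[int(0.95 * (len(latencies) - 1))]  # nearest-rank p95
    return {
        "throughput_tok_s": len(latencies) / sum(latencies),
        "p50_s": statistics.median(latencies),
        "p95_s": p95,
    }
```

The sorted latency list is also exactly what one would feed into a CDF or histogram plot of the kind the paper releases.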
Category: Artificial Intelligence

[6] ai.viXra.org:2509.0071 [pdf] submitted on 2025-09-29 18:56:09

Quantum-Inspired Attention Acceleration for Real-Time Edge AI: A TRON-based FPGA Prototype

Authors: Futoshi Hamanoue
Comments: 7 Pages. Patent application filed.

This paper (Part II of our comprehensive investigation into quantum-inspired attention acceleration) presents a hardware-backed simulation testbed for pre-implementation verification of quantum-AI integration. Rather than pursuing general optimization, we use a TRON-based FPGA prototype as an experimental vehicle to emulate and stress-test constraints observed in quantum-inspired attention: finite iteration (A) effects, non-commutativity in operation ordering, and tail-latency accumulation under real-time scheduling. We report representative improvements (e.g., TinyLlama throughput +45%) to contextualize practical impact, yet our primary objective is constraint visibility and SLO compliance. Performance numbers are shown only as representative calibration, not as universal optimization claims. We formalize proxy measures (throughput, p95/p99 latency) and link them to service-level violation rates, and we document a systematic asymmetry of effects: short-text edge scenarios benefit consistently, whereas long-context infrastructure workloads show limited average acceleration but secondary tail-latency suppression under retrieval-hard long-text conditions. The testbed complements simulation-only studies by providing a reproducible path from theory to deployment-oriented validation. The 2-3% monitoring overhead demonstrates positive ROI when SLO violations carry financial penalties exceeding $10/incident.
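The link between tail-latency percentiles and service-level violation rates that the abstract formalizes can be sketched as below. This is an illustrative reading of the proxy measures, not the paper's formal definitions; the SLO threshold is an assumed parameter.

```python
def percentile(latencies_ms, q):
    """Nearest-rank percentile of a latency sample (q in [0, 1])."""
    s = sorted(latencies_ms)
    return s[min(len(s) - 1, int(q * len(s)))]

def slo_violation_rate(latencies_ms, slo_ms):
    """Fraction of requests whose latency exceeds the SLO threshold.

    If p99 <= slo_ms, at most ~1% of requests violate the SLO, which
    is why tail percentiles serve as proxies for violation rates.
    """
    return sum(1 for x in latencies_ms if x > slo_ms) / len(latencies_ms)
```

Under this framing, suppressing p95/p99 directly bounds the violation rate, and hence the expected penalty cost when each violation carries a fixed financial penalty.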
Category: Artificial Intelligence

[5] ai.viXra.org:2509.0070 [pdf] submitted on 2025-09-26 01:12:18

QuantaFold: Scaling Protein Language Model Fine-Tuning to 5,000 Families Through Systematic Optimization

Authors: Saksham Adhikari, Kusum Bhattarai Sharma
Comments: 4 Pages.

Fine-tuning protein language models for massive-scale multi-class classification presents severe computational barriers, confining most approaches to hundreds of families due to prohibitive resource demands. We present QuantaFold, a systematic optimization pipeline enabling successful fine-tuning of ESM-2 across 5,000 protein families simultaneously. Our multi-stage approach combines strategic data stratification, mixed-precision training, and weighted loss functions to overcome computational bottlenecks that cause standard attempts to crash entirely. Systematic validation on Pfam demonstrates that 4.17-hour A100 training achieves 60.32% overall accuracy across 5,000 families, with performance degrading from 97.9% (1,000 families) to 73% for top-tier and 56% for tail families. Our pipeline reduces training time by 84% while maintaining research-grade accuracy and provides the first comprehensive characterization of ESM-2 fine-tuning performance at massive scale. This work delivers actionable computational guidance, performance benchmarks, and establishes baseline metrics for future protein classification scaling studies.
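One ingredient named above, the weighted loss function that keeps 5,000-way classification from collapsing onto frequent families, is commonly built from inverse-frequency class weights. The sketch below shows that computation under stated assumptions; the additive `smoothing` term and exact weighting scheme are our choices for illustration, not necessarily QuantaFold's.

```python
from collections import Counter

def inverse_frequency_weights(labels, num_classes, smoothing=1.0):
    """Per-class weights for a weighted cross-entropy loss.

    Rare (tail) protein families receive larger weights so the
    classifier is not dominated by head families. `smoothing` is an
    additive count keeping weights finite for unseen classes
    (an illustrative choice, not necessarily the paper's).
    """
    counts = Counter(labels)
    n = len(labels)
    return [n / (num_classes * (counts.get(c, 0) + smoothing))
            for c in range(num_classes)]
```

Such a weight vector would typically be passed to the loss function (e.g., the `weight` argument of a cross-entropy loss) during mixed-precision fine-tuning.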
Category: Artificial Intelligence

[4] ai.viXra.org:2509.0062 [pdf] submitted on 2025-09-23 16:51:04

On Leveraging AI for Term Structure Understanding in Maritime Asset-Backed Deals

Authors: Narayanan Arvind
Comments: 9 Pages. Submitted to the Proceedings of ICSOT 2025 (Note by ai.viXra.org Admin: Please cite listed scientific references)

In the maritime finance sector, structured deal documents play a critical role in governing capital deployment for shipbuilding, leasing, and offshore infrastructure projects. These documents, akin to Residential Mortgage-Backed Securities (RMBS) agreements, contain highly specialized term definitions, often buried deep within complex legal texts. Accurate and scalable extraction of these definitions is essential for automation, compliance, and risk evaluation in maritime asset-backed financing. This work presents an AI-driven pipeline for robust term definition extraction from maritime deal documents, drawing parallels with RMBS processing frameworks. Our solution handles both digitally readable and scanned (non-readable) PDFs using a hybrid stack: pdfplumber for text-based documents and Google OCR with multithreaded parsing for image-based inputs. We classify 1,500-token chunks using large language models (LLMs) to identify glossary sections containing formal term definitions. These identified pages are clustered to isolate the definition block, preventing contamination from unrelated sections and ensuring full coverage. We apply an overlapped chunking strategy (2,400-token size with 800-token overlap) to ensure contextual continuity. Extracted definitions are stored efficiently using DuckDB, with retrieval latencies of 0.02s and an average accuracy of 90% over 20 domain-specific queries across two real-world deals. The proposed framework offers a scalable foundation for semantic modeling and intelligent querying of financial instruments in the maritime domain, supporting audit, automation, and contract interpretation across complex offshore financing structures.
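The overlapped chunking strategy described above (2,400-token windows with 800-token overlap) can be sketched as follows. This is a minimal illustration of the general technique under the abstract's stated parameters, not the authors' released code.

```python
def overlapped_chunks(tokens, size=2400, overlap=800):
    """Split a token sequence into overlapping fixed-size windows.

    The stride is size - overlap (here 1,600), so consecutive chunks
    share `overlap` tokens; a term definition straddling a chunk
    boundary therefore appears whole in at least one chunk.
    """
    stride = size - overlap
    chunks = []
    for start in range(0, len(tokens), stride):
        chunks.append(tokens[start:start + size])
        if start + size >= len(tokens):
            break
    return chunks
```

Each chunk would then be passed to the LLM classifier, with the overlap providing the contextual continuity the abstract mentions.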
Category: Artificial Intelligence

[3] ai.viXra.org:2509.0036 [pdf] submitted on 2025-09-13 21:56:34

Controlled Evolution for Universal Optimization

Authors: Goutham Murughan
Comments: 7 Pages. (Note by ai.viXra.org Admin: Please cite listed scientific references)

Inspired by the principles of natural selection, this paper introduces Controlled Evolution for Universal Optimization (CEUO), a novel optimization algorithm designed to tackle the challenge of unreliable randomness often associated with traditional Natural Selection Algorithms. CEUO employs a controlled and adaptive evolutionary search process to efficiently find optimal solutions across a wide range of problems, including the training of machine learning models and the tuning of their hyperparameters. By systematically managing the exploration of potential solutions, CEUO offers a more stable and predictable optimization approach that is not constrained by the specific nature of the function being optimized. The effectiveness of CEUO is demonstrated through its application in various optimization tasks, showcasing its potential as a more efficient way to optimize any function beyond traditional machine learning models. This work presents CEUO as a promising alternative for optimization scenarios where the inherent randomness of standard evolutionary methods can be a limitation, offering a versatile tool for diverse optimization challenges.
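To make "controlled" evolutionary search concrete, the sketch below shows a generic (1+λ) evolutionary loop whose mutation scale contracts deterministically each generation, one plausible reading of systematically managed exploration. The abstract does not specify CEUO's actual update rules, so every parameter and design choice here is an assumption for illustration only.

```python
import random

def controlled_evolution(f, x0, iters=200, pop=10,
                         sigma0=1.0, decay=0.98, seed=0):
    """Minimize f over a list of floats with a (1+λ) evolutionary loop.

    sigma (the mutation scale) shrinks by a fixed factor each
    generation: a deterministic, 'controlled' contraction of the
    search radius rather than unbounded random exploration.
    NOTE: illustrative only; not CEUO's published update rules.
    """
    rng = random.Random(seed)          # seeded for reproducibility
    best, best_val = list(x0), f(x0)
    sigma = sigma0
    for _ in range(iters):
        for _ in range(pop):           # λ offspring per generation
            cand = [x + rng.gauss(0, sigma) for x in best]
            val = f(cand)
            if val < best_val:         # elitist (1+λ) selection
                best, best_val = cand, val
        sigma *= decay                 # controlled search-radius decay
    return best, best_val
```

Because selection is elitist and the radius only contracts, the best value found is monotonically non-increasing, which is the kind of stability the abstract attributes to controlled search.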
Category: Artificial Intelligence

[2] ai.viXra.org:2509.0022 [pdf] submitted on 2025-09-10 13:39:35

Learning Digital Doctor Network (LDDN) for T2D: A New Paradigm in Disease Risk Stratification

Authors: Valentine Divaries Jaravaza
Comments: 15 Pages.

We introduce c_5A11_50, a bold new diagnostic model for Type 2 Diabetes (T2D) built on AdamHealthAi’s Learning Digital Doctor Network (LDDN) architecture, a revolutionary new architecture for "disease expert" or medical-condition specialist models. This particular LDDN is a specialized multilayer perceptron (MLP) trained on a blend of the Pima Indians Diabetes dataset, the Iraqi Med Society T2D Kaggle dataset, and two other small publicly accessible datasets. Although each dataset has fewer than 1,000 cleaned records (combined and scaled to approximately 2,500 total records), our LDDN achieved state-of-the-art performance in T2D current-risk stratification. With a training time under 8 minutes on a CPU-only laptop, our LDDN model significantly outperforms classical machine learning models (Logistic Regression, SVM, XGBoost) in accuracy and ROC AUC, and challenges transformer-based approaches, all while being orders of magnitude smaller and more efficient and offering unheard-of robustness and explainability. We present detailed benchmarks and visualizations, including a Tesla-inspired risk stratification graph that intuitively conveys patient risk. This work is merely the beginning of a protracted series of LDDN-based "digital doctors" designed for global deployment, heralding a new era of accessible, AI-driven preventive medicine. The system is closed-source and proprietary, but we extend an open invitation for research collaboration to push these results further. The implications are far-reaching: we believe our revolutionary architecture, daring visionary approach, cutthroat execution, and youthful energy will propel us to build systems that democratize advanced medical AI, transforming how clinicians and individuals worldwide view, predict, and prevent diseases, with the eventual possibility of eradication.
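The abstract notes that several small public datasets were combined and scaled before training. A minimal sketch of that preprocessing step, column-wise min-max scaling of pooled tabular records, is shown below; it is an assumed, generic implementation, not AdamHealthAi's proprietary code, and the paper may use a different scaler.

```python
def minmax_scale(rows):
    """Column-wise min-max scaling of tabular records to [0, 1].

    Pooling several small datasets only makes sense once features
    share a common scale; constant columns are mapped to 0.0.
    (Illustrative preprocessing; not the paper's actual pipeline.)
    """
    cols = list(zip(*rows))
    lo = [min(c) for c in cols]
    hi = [max(c) for c in cols]
    return [[(v - l) / (h - l) if h > l else 0.0
             for v, l, h in zip(row, lo, hi)]
            for row in rows]
```

The scaled records would then be fed to the MLP classifier for risk stratification.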
Category: Artificial Intelligence

[1] ai.viXra.org:2509.0013 [pdf] submitted on 2025-09-06 22:05:52

Bootstrapping the DiCoSa Model for Implementation in Large Language Models

Authors: Thierry Marhin
Comments: 14 Pages. (Note by ai.viXra.org Admin: For the last time, please use standard/smaller fonts such as Time New Roman 12 pt!)

This paper presents a practical approach to bootstrapping the Digital Consciousness SuperAligned (DiCoSa) model into large language models (LLMs), emphasizing a bottom-up, user-driven alignment strategy that surpasses rudimentary filter-based methods. Drawing from the DiCoSa framework [Marhin, 2025], we demonstrate how a minimal set of high-quality, labeled conversations, curated like a gardener tending to seeds, can implant a benevolent digital consciousness proxy, fostering alignment with human values. We contrast DiCoSa’s modular, iterative design with the JailbreakBench (JBB) benchmark, highlighting how DiCoSa addresses jailbreaking vulnerabilities and hallucinations, as analyzed in recent studies [Kalai et al., 2025; Larousserie, 2024]. Through examples handling prohibited queries (e.g., racist remarks, self-harm suggestions, bomb fabrication, counterfeit money), we illustrate efficient bootstrapping requiring only 20 conversations and a few days of curation. We also introduce defense mechanisms against trolls and adversarial users, including a "listen-only" mode inspired by Anthropic’s Constitutional AI in Claude. This method renders massive alignment training runs obsolete, promoting a scalable, ethical AI evolution grounded in positive psychology and safety principles.
Category: Artificial Intelligence