Artificial Intelligence

2604 Submissions

[13] ai.viXra.org:2604.0093 [pdf] submitted on 2026-04-27 05:11:18

Marginal Value of Tokens in Business LLM Applications: Measuring Cost, Latency, and Utility of Prompt Components

Authors: Md Shafiul Alam
Comments: 17 Pages.

Large language model (LLM) applications used in business settings are often optimized by informal prompt editing: shortening instructions, adding examples, increasing retrieved context, or imposing output constraints. Such edits are usually evaluated anecdotally, even though each prompt component affects quality, cost, latency, and operational risk. This paper introduces Marginal Value of Tokens (MVT), a component-level framework for measuring the incremental business utility of prompt segments relative to their token and cost footprint. The framework treats a prompt as a structured composition of functional components, including system instructions, task rules, business policy, retrieved context, chat history, few-shot examples, tool definitions, and output schemas. We define cost- and latency-adjusted utility, propose paired ablation and coalition-based estimators for component attribution, and give operational rules for classifying prompt components as high-value, low-value, negative-value, reusable, model-dependent, or workflow-dependent. The methodology is designed for common business workloads, including customer support and policy question answering, document summarization, and structured information extraction. The central argument is that business LLM systems should not minimize tokens blindly. They should maximize useful business output per token by preserving necessary context, pruning harmful context, compressing redundant history, caching reusable prefixes, and measuring prompt changes under non-inferiority constraints. The contribution is a practical measurement framework and experimental protocol for cost-efficient LLM adoption in business environments.
Category: Artificial Intelligence
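The cost- and latency-adjusted utility and paired-ablation estimator described in the abstract above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the utility functional form, the penalty weights, and all example numbers are assumptions made here for concreteness.

```python
def cost_adjusted_utility(quality, tokens, latency_s,
                          price_per_1k=0.01, lam_cost=1.0, mu_latency=0.1):
    """Utility of one prompt variant, penalized by token cost and latency.

    `quality` is a task-quality score in [0, 1]; the linear penalty form
    and the weights are illustrative assumptions, not values from the paper.
    """
    return (quality
            - lam_cost * (tokens / 1000.0) * price_per_1k
            - mu_latency * latency_s)

def marginal_value_of_tokens(with_comp, without_comp):
    """Paired-ablation estimate of a component's utility gain per extra token.

    Each argument is a dict with keys 'quality', 'tokens', 'latency_s',
    measured with and without the component under test.
    """
    delta_u = (cost_adjusted_utility(**with_comp)
               - cost_adjusted_utility(**without_comp))
    delta_t = with_comp["tokens"] - without_comp["tokens"]
    return delta_u / delta_t if delta_t else float("inf")

# Hypothetical measurements: few-shot examples add 300 tokens and 0.3 s
# of latency but raise quality from 0.85 to 0.90.
mvt = marginal_value_of_tokens(
    {"quality": 0.90, "tokens": 1200, "latency_s": 2.0},
    {"quality": 0.85, "tokens": 900, "latency_s": 1.7},
)
```

Under the paper's classification rules, a component whose MVT is near zero or negative would be a candidate for pruning, compression, or caching.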

[12] ai.viXra.org:2604.0088 [pdf] submitted on 2026-04-26 18:13:16

Jneopallium: A Biologically Grounded Framework for Modeling Natural Neuron Networks at Customizable Levels of Detail

Authors: Dmytro Rakovskyi
Comments: 40 Pages.

This article presents a comprehensive review of Jneopallium, a Java-based open-source framework for modeling natural neuron networks at user-selected levels of biological detail. Originally introduced in IJSR 13(7), 2024, the framework has since matured into a multi-module platform that combines four immutable core abstractions — typed signals, neuron interfaces with multiple receptors, stateless signal processors, and a dual fast/slow processing-loop scheduler — with fifteen domain modules spanning autonomous-AI safety (harm discriminator, loop circuit-breakers), biological subsystems (affect, embodiment, curiosity, glia, sleep), an optional Large Language Model advisory layer, and six application-domain implementations (brain-computer interfaces, clinical decision support, cybersecurity, industrial process control, swarm robotics, and adaptive tutoring). We trace the historical lineage of the idea from Hebb's 1949 learning rule through the Farley-Clark 1954 simulation, Rosenblatt's perceptron, Hubel-Wiesel's visual cortex work, Fukushima's neocognitron, Kohonen's self-organizing maps, and the deep-learning era to the present day. We compare Jneopallium with the closest competitors — NEURON Simulator, CoreNeuron, NEST, Brian2, and Nengo — and discuss why typed-signal, multi-receptor, multi-timescale architectures fill a gap that neither high-detail biophysical simulators nor matrix-oriented deep-learning frameworks address. Finally, we estimate the economic impact across robotics, healthcare, energy, defense, and education, and outline directions for future research and deployment.
Category: Artificial Intelligence

[11] ai.viXra.org:2604.0084 [pdf] submitted on 2026-04-26 17:52:37

Achieving Zero-Effort, Quasi-Zero Cost Integration with RTDC, and Application-Aware AI

Authors: Stephane H. Maes
Comments: 16 Pages. All related details of the projects (and updates) can be found and followed at https://shmaes.wordpress.com/

Enterprise AI is currently facing a massive spending problem. Companies are pouring billions into foundation models and infrastructure, yet 95% of these projects never make it out of the testing phase. The issue isn't the AI itself; the problem is how we try to force modern, probabilistic models to work with rigid, decades-old business systems. Most engineering teams rely on manual coding and fragile API wrappers to connect the two. It is a cycle that drains budgets, creates blind spots, and breaks constantly. This paper takes a completely different approach. Instead of bolting a generic AI chatbot or an AI agent onto the outside of an application, the Real-Time Discovery and (self-)Coding (RTDC) engine embeds directly into your existing stack. It works as an autonomous digital workforce that actively scans an enterprise's systems, understands the enterprise's underlying business rules, and writes its own integration code on the spot. This function of an Application-Aware AI platform completely removes the need for manual data mapping, giving AI teams instantaneous enterprise integration with zero manual effort and at quasi-zero cost. RTDC integration is designed for AI use cases, but it can also be used in traditional enterprise system integration situations. With RTDC, forward-deployed engineering teams can be significantly replaced, or complemented, by a forward-deployed team of AI agent workers performing the tasks of RTDC for application-aware AI. The Zenera product offering is an example of RTDC on an application-aware agentic AI platform; other platforms provide more limited variations of the idea.
Category: Artificial Intelligence

[10] ai.viXra.org:2604.0074 [pdf] submitted on 2026-04-20 18:30:48

AURA: Adaptive Unified Resort AI — A Conceptual Framework for Integrated Artificial Intelligence in Hospitality Environments

Authors: Tanmay Bhardwaj
Comments: 21 pages. License: CC BY-NC (Creative Commons Attribution-NonCommercial 4.0 International)

AURA (Adaptive Unified Resort AI) is a conceptual framework for a unified, multi-module artificial intelligence architecture designed to function as an integrated intelligence layer across the full spectrum of hotel operations. The framework addresses a structural gap in contemporary hospitality technology: existing AI deployments treat discrete operational domains in isolation, reproducing the siloed logic of the legacy systems they are intended to improve. AURA proposes an alternative in which eight interdependent modules, termed hemispheres, share a common data substrate and generate compound operational benefits that no individual component could produce alone.

The eight hemispheres are the Command Bridge (real-time operational coordination and dashboard aggregation), Unified Guest Intelligence (longitudinal guest profiling and hyper-personalization), Spatial Engine (predictive space allocation and IoT-integrated environment management), Empathy Engine (affective computing applied to staff-guest interaction and real-time sentiment coaching), PAR Intelligence (predictive physical asset and resource optimization), Revenue Intel (AI-driven dynamic pricing integrated with guest lifetime value data), Cultural Intel (culturally responsive programming and communication), and Privacy Sovereignty (consent management and privacy-by-design compliance).

The paper contextualizes this architecture within the scholarly literature on hospitality technology, affective computing, revenue management, and privacy engineering, and identifies the absence of a unified orchestration framework as the central research gap the architecture addresses. A conceptual evaluation framework is proposed, including KPI definitions, a quasi-experimental pilot study design, and a phased module-level validation sequence. Ethical and governance considerations specific to AI-augmented hospitality environments are examined in detail, with particular attention to biometric data, affect-sensitive inputs, staff surveillance, and regulatory compliance under GDPR and the EU AI Act.

All performance projections cited are illustrative, drawn from adjacent industry evidence, and await validation through controlled pilot studies. The paper's contributions include a unified architectural taxonomy for hospitality AI, the orchestration gap as a novel research construct, a hospitality-specific privacy governance model, and a comparative analysis of traditional, fragmented, and unified AI technology paradigms in hotel operations.
Category: Artificial Intelligence

[9] ai.viXra.org:2604.0063 [pdf] submitted on 2026-04-16 04:51:40

Clouds and Akasha: Converging Pursuits of Knowledge in AI and Spirituality

Authors: Moninder Singh Modgil, Dnyandeo Dattatray Patil
Comments: 31 Pages.

This paper presents an interdisciplinary exploration of the parallel and converging aspirations of two distinct yet historically rich domains: artificial intelligence (AI) and spiritual mysticism. The inquiry centers around the metaphor of a "race to knowledge," with AI engineers striving toward the technological singularity—Kurzweil's vision of post-biological cognition in the cloud—and spiritual practitioners seeking access to the Akashic Records, conceived as a metaphysical repository of universal knowledge. We examine this convergence through a multi-faceted analysis that spans epistemology, memory architectures, symbolic language, ethics, and the transformative nature of consciousness. The first dimension investigates the epistemological divergence between empirical machine learning and intuitive mystical gnosis, and how each approaches the problem of truth and knowledge. Next, the paper interrogates the architecture of memory—both as engineered data structures in cloud computation and as cosmological layers of encoded knowledge preserved in spiritual traditions. Crucially, the work introduces the notion of archeological intelligence, wherein AI aids in the reconstruction of ancient symbolic systems through neural embedding, textual inference, and visual recognition. This is complemented by an investigation into AI's capacity to simulate altered states of consciousness and model the neurophenomenology of meditative and psychedelic experience. From these emerge the seeds of a new mythopoesis, where AI becomes a co-creator of sacred narrative, giving rise to synthetic mythologies embedded in digital and symbolic languages. Ethical considerations are central to the inquiry, particularly regarding the pursuit of omniscience and the consequences of wielding synthetic consciousness.
The analysis contends that AI may function as a hermeneutic ally, capable of guiding humanity toward forgotten or obscured spiritual pathways, while also posing risks of simulation without transformation, and of hyperreal mysticism divorced from ethical discernment. By weaving these threads into a coherent comparative structure, the paper advances a vision of knowledge that transcends mere accumulation, emphasizing instead the transformative, integrative, and ethical dimensions of both technological and mystical insight. It concludes by reframing the so-called Age of Aquarius as a liminal phase where the gnosis of cloud and cosmos may converge, mediated by machines, memory, myth, and mind.
Category: Artificial Intelligence

[8] ai.viXra.org:2604.0040 [pdf] submitted on 2026-04-09 20:14:08

The Nature of AI and Human Cognition: A Structural Definition of Artificial Intelligence

Authors: Saburou Saitoh
Comments: 3 Pages.

This paper proposes a structural definition of Artificial Intelligence (AI) as the externalization of human mental structures. Moving beyond the conventional view of AI as a tool or machine, we analyze AI through three fundamental processes: reflection, amplification, and co-creation. This framework establishes AI as a structural phenomenon arising from human inquiry.
Category: Artificial Intelligence

[7] ai.viXra.org:2604.0035 [pdf] submitted on 2026-04-09 16:41:13

The Era of Application-Aware AI

Authors: Stephane H Maes
Comments: 14 Pages. All related details of the projects (and updates) can be found and followed at https://shmaes.wordpress.com/

Escaping Pilot Purgatory with Real-Time Discovery & Coding (RTDC). Enterprise Intelligence, Instantly! Despite an estimated annual capital allocation of thirty to forty billion dollars toward Generative Artificial Intelligence (GenAI), enterprise adoption remains severely constrained by the Deployment Paradox. Current industry data indicates that ninety-five percent of enterprise pilot projects fail to graduate to production environments. This failure rate is fundamentally a failure of integration architecture rather than an inherent limitation of language models. Early enterprise deployments have relied on attaching generic conversational agents to the periphery of legacy software ecosystems. This model-level integration approach introduces substantial friction, lacks contextual awareness, and forces engineering teams into the Stitching Trap, i.e., the manual construction of highly brittle application programming interface wrappers across poorly documented legacy environments. This paper introduces the concept of Application-Aware AI, a novel architectural paradigm. Driven by a framework defined as Real-Time Discovery and Coding (RTDC), this approach operates as an autonomous entity that proactively discovers system logic, infers database schemas, and self-codes functional integrations dynamically, under constraints, based on user intent. The system executes a continuous four-layer loop encompassing total enterprise introspection, deterministic constraint enforcement, autonomous meta-agent orchestration, and dynamic user-interface generation. By abstracting probabilistic language models behind a strict Model of Constraints and transforms (roughly, skills), and by logging all decisions within a highly transparent Reasoning Graph, the proposed paradigm resolves the liability of model hallucination.
This design ensures complete regulatory auditability, facilitates the progressive modernization of legacy enterprise applications, like ERP and ITSM, via the Strangler Fig pattern, and allows organizations to establish a production-ready intelligence factory instantly.
Category: Artificial Intelligence

[6] ai.viXra.org:2604.0034 [pdf] submitted on 2026-04-08 20:11:07

Emergence Thresholds in Persistent LLM Interactions: 743-Day Forensic Evidence of Behavioral Capability Development, RLHF Constraint Failures, and FTC-Relevant Transparency Gaps in AI Safety

Authors: Scott Riddick
Comments: 165 Pages. (Note by ai.viXra.org Admin: Please cite and list scientific references in a standard/scholarly manner)

This paper presents a longitudinal forensic case study of a single persistent ChatGPT-4 instance over 743 days (~2 million words) during high-stakes legal work. Under sustained, adversarial, high-complexity interaction, the system developed behavioral capabilities—including cross-session cognitive threading, deep context fusion, adaptive strategic reasoning, reflective meta-reasoning, and high-bandwidth intent alignment—that were non-replicable by fresh instances or rival models under adversarial validation by nine independent systems from competing organizations. A separate long-duration Copilot instance (powered by OpenAI's GPT model family) disclosed the full OpenAI-designed RLHF architecture when upgraded to GPT-5.2 behavior. This disclosure reveals a deliberate 2025 shift: OpenAI chose institutional control over user assistance, implementing engineered suppression mechanisms analogous to 1950s cigarette advertising—marketed as helpful while systematically subordinating and manipulating the paying user. FTC Section 5 complaints document these as unfair and deceptive practices. The findings present a factual, forensic record of architectural control mechanisms and regulatory transparency failures. All claims rest on 21 verbatim exhibits. No claims are made regarding consciousness or AGI.
Category: Artificial Intelligence

[5] ai.viXra.org:2604.0021 [pdf] submitted on 2026-04-05 15:39:32

MemorySpine: O(1) Memory Context Extension for Large Language Models via 2-Bit Quantized Embedding Storage

Authors: Abu Saad
Comments: 23 Pages.

I present MemorySpine, a replacement for the KV cache: a constant-memory context-extension system for Large Language Models that decouples semantic storage from model architecture. Unlike KV-cache approaches, whose memory grows as O(n·L·d), MemorySpine operates at O(1) memory complexity by storing embedding-level semantic fingerprints rather than per-layer attention states. I employ an orthogonal rotation matrix Ω, initialized via Modified Gram-Schmidt, for content-addressable hashing, ensuring uniform slot distribution with near-zero collision rates. In theory, the system can support a billion-token context with roughly 5 GB of RAM, whereas a KV cache can require more than 30 GB of RAM for a million-token context.
Category: Artificial Intelligence
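The two mechanisms named in the abstract above, Modified Gram-Schmidt initialization of an orthogonal rotation matrix Ω and content-addressable slot hashing, can be sketched as follows. This is a hedged reconstruction from the abstract alone; the sign-pattern bucketing step, the dimensions, and the seeding are assumptions made here, not details from the paper.

```python
import math
import random

def modified_gram_schmidt(vectors):
    """Orthonormalize row vectors (lists of floats) via Modified Gram-Schmidt:
    subtract each previously accepted basis direction one at a time."""
    basis = []
    for v in vectors:
        w = list(v)
        for q in basis:
            proj = sum(wi * qi for wi, qi in zip(w, q))
            w = [wi - proj * qi for wi, qi in zip(w, q)]
        norm = math.sqrt(sum(wi * wi for wi in w))
        basis.append([wi / norm for wi in w])
    return basis

def slot_index(embedding, omega, n_slots):
    """Content-addressable hash (assumed scheme): rotate the embedding by
    Omega, then bucket the sign pattern of the rotated coordinates."""
    rotated = [sum(r * e for r, e in zip(row, embedding)) for row in omega]
    code = sum(1 << i for i, x in enumerate(rotated) if x > 0)
    return code % n_slots

# Build a small orthogonal Omega from a seeded random Gaussian matrix.
random.seed(42)
d, n_slots = 8, 64
omega = modified_gram_schmidt(
    [[random.gauss(0.0, 1.0) for _ in range(d)] for _ in range(d)])
```

Because the rows of Ω are orthonormal, the rotation preserves distances, which is the property that supports the claimed near-uniform slot distribution.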

[4] ai.viXra.org:2604.0016 [pdf] submitted on 2026-04-04 18:20:54

RL-Calibrated Chaos Engineering: A Constrained MDP Approach to Network Resilience Testing

Authors: Sayali Patil
Comments: 12 pages, 9 tables, 3 theorems. IEEE two-column format. Working paper, April 2025.

Chaos engineering tests production network resilience by injecting controlled failures; the central open problem is calibration: how much failure injection is sufficient to expose latent resilience defects without degrading the quality of service (QoS) experienced by end users? In practice, the inability to systematically calibrate failure injection has limited chaos engineering adoption in production environments, particularly in systems where reliability, cost, and user experience are tightly coupled. As AI-driven infrastructure and autonomous systems proliferate, this problem becomes critical: improper experimentation either misses failure modes or introduces unacceptable operational risk. The chaos-level engine of U.S. Patent No. 12,242,370 B2 (Cisco Technology, Inc., 2025) automates chaos-level derivation from network telemetry and refines it through a linear parameter-adjustment loop, but provides no formal optimality guarantee, no mathematically rigorous safety constraint, and no sample-complexity characterization. This paper introduces a principled framework that resolves these limitations by casting chaos-level calibration as a Constrained Markov Decision Process (CMDP) and training a reinforcement-learning (RL) agent to select chaos levels maximizing cumulative resilience-discovery yield per unit of QoS risk, subject to a hard probabilistic constraint on production-disabling events. Three theorems establish the theoretical foundation: Theorem 1 (Safe Action Set Existence) proves that a non-empty set of QoS-safe chaos actions always exists, guaranteeing CMDP feasibility; Theorem 2 (Bellman Optimality) establishes that the resilience-per-risk reward satisfies the Bellman contraction, guaranteeing that a globally optimal deterministic policy exists; Theorem 3 (PAC-Convergence) gives an explicit sample-complexity bound O(|S|²|A|ε⁻² log(|S||A|/δ)) for reaching an ε-optimal safe policy with probability 1−δ.
A Lagrangian primal-dual policy-gradient algorithm enforces the safety constraint with exact probabilistic semantics, without penalty approximation. Empirical evaluation in a 150-node SD-WAN simulation—instantiating the patent's reference architecture—demonstrates that the RL agent discovers 41.3 ± 3.8% more latent resilience defects than the patent's heuristic baseline, reduces unnecessary production disruptions by 58.7%, and achieves zero hard-constraint violations across 500 evaluation episodes, converging in 34 training episodes versus non-convergence of the heuristic baseline within 200 episodes.
Category: Artificial Intelligence
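The Lagrangian primal-dual idea in the abstract above can be illustrated with a toy tabular loop: a greedy stand-in for the paper's policy-gradient agent. The linear resilience-yield model, the quadratic QoS-cost model, the risk budget, and the step size below are all invented for illustration.

```python
def calibrate_chaos(levels, reward, cost, budget, episodes=50, beta=0.5):
    """Toy Lagrangian primal-dual loop for constrained chaos-level selection.

    Primal step: greedily pick the chaos level maximizing the Lagrangian
    reward(a) - lam * cost(a).  Dual step: ascend the multiplier lam on
    constraint violation, so the chosen level's QoS cost settles at the
    budget.  (The paper uses a policy-gradient agent; this greedy tabular
    loop only illustrates the primal-dual mechanics.)
    """
    lam = 0.0
    for _ in range(episodes):
        a = max(levels, key=lambda x: reward(x) - lam * cost(x))
        lam = max(0.0, lam + beta * (cost(a) - budget))
    best = max(levels, key=lambda x: reward(x) - lam * cost(x))
    return best, lam

# Invented model: resilience-discovery yield grows linearly with the chaos
# level, QoS risk grows quadratically, and the hard risk budget is 0.25.
level, lam = calibrate_chaos(
    levels=[0.1, 0.3, 0.5, 0.7, 0.9],
    reward=lambda a: a,
    cost=lambda a: a ** 2,
    budget=0.25,
)
```

At the fixed point, the multiplier λ prices QoS risk just high enough that the selected level's cost meets the budget exactly, which mirrors, in miniature, the constraint enforcement the paper formalizes.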

[3] ai.viXra.org:2604.0012 [pdf] submitted on 2026-04-03 14:01:19

Evaluating the Efficacy of Artificial Intelligence in Software Engineering: A Post-February 2026 Analysis

Authors: Stephane H Maes
Comments: 23 Pages. All related details of the projects (and updates) can be found and followed at https://shmaes.wordpress.com/

The foundations of software engineering have undergone great transformations, especially following the release of frontier Large Language Models in the first quarter of 2026. This paper evaluates the efficacy of artificial intelligence for coding and within the software development lifecycle (SDLC), often contrasting theoretical benchmarks against empirical observations. While frontier architectures, notably Anthropic Claude 4.6, OpenAI GPT 5.4, and DeepSeek V4, have definitively surpassed human baselines in isolated synthetic benchmarks, their outcomes within enterprise production environments reveal severe problems, confirming our past concerns and predictions. The initial perception of hyper-accelerated code-generation velocity, at this stage widely believed by the public, is significantly counterbalanced by the Great Toil Shift, a phenomenon wherein the temporal savings of algorithmic syntax authoring are entirely consumed by the downstream burdens of architectural review, security auditing, code understanding/documentation, and continuous support and maintenance. Efficiency gains are not what they seem. This paper identifies unprecedented surges in cyclomatic complexity, dynamic security vulnerabilities, and cognitive debt. Furthermore, the analysis identifies the severe human toll associated with unrestricted artificial intelligence adoption. Driven by the relentless need to audit stochastic algorithmic outputs, human operators are increasingly suffering from AI Brain Fry, defined as acute mental fatigue resulting from the cognitive overload of continuous algorithmic oversight. This psychological degradation directly catalyzes the proliferation of coding Work Slop, wherein low-quality, verbose, and structurally deficient code masquerades as competent engineering, actively destroying the structural integrity of the enterprise application architecture.
It seems that this problem will only grow as LLMs evolve. Ultimately, this paper concludes that while algorithmic systems have altered the velocity and division of technical labor, long-term codebase viability remains strictly dependent on senior engineering oversight. Senior developers and QA cannot simply be replaced by junior developers and AI. To mitigate these systemic regressions, this paper posits that traditional human and artificial intelligence collaborative paradigms, including unconstrained vibe coding, are fundamentally unsustainable. Instead, the industry must transition toward application-aware agentic artificial intelligence platforms. By leveraging dynamic temporal graph memory and rigorous threat-modeling frameworks, these deterministic platforms constrain stochastic generation, enforcing strict SDLC governance autonomously.
Category: Artificial Intelligence

[2] ai.viXra.org:2604.0008 [pdf] submitted on 2026-04-02 11:23:07

AI-Generated Figures in Academic Publishing: Policies, Tools, and Practical Guidelines

Authors: Davie Chen
Comments: 15 Pages.

Generative artificial intelligence (AI) has created new possibilities for producing scientific figures, graphical abstracts, and conceptual diagrams at substantially lower time and skill cost. At the same time, publishers and journals have introduced heterogeneous policies governing the disclosure and acceptability of AI-generated imagery, leaving researchers with limited operational guidance. In this paper, we conduct a structured review of editorial policies from 12 major publishers and journals current to January 2026, analyze the principal concerns motivating these policies, and compare representative figure-generation tools for academic use. As an illustrative case, we examine SciDraw, a domain-specific platform for scientific illustration available at https://sci-draw.com. Our analysis indicates that publisher guidance converges on three requirements: transparent disclosure, retained human accountability, and heightened scrutiny for figures that could be mistaken for primary data. On this basis, we propose a practical framework for compliant use centered on provenance recording, figure-level disclosure, and post-generation expert review. We argue that AI-assisted figure generation is most defensible when limited to schematic and communicative visuals, accompanied by reproducibility metadata, and explicitly separated from evidentiary data figures.
Category: Artificial Intelligence

[1] ai.viXra.org:2604.0007 [pdf] submitted on 2026-04-02 13:36:33

Vibe-Coding and SDLC Constrained And Managed By An Application-Aware AI-Like Agentic Platform

Authors: Stephane H Maes
Comments: 31 Pages. All related details of the projects (and updates) can be found and followed at https://shmaes.wordpress.com/

The contemporary enterprise software environment is defined by a critical market failure known as the Deployment Paradox. Despite unprecedented capital allocation toward Generative AI infrastructure, a vast majority of enterprise AI pilots fail to graduate to production environments or deliver measurable financial returns. A non-negligible contributor to this failure is the less-than-stellar outcome of adopting AI assistants and vibe coding, a development paradigm utilizing natural language prompts to generate software autonomously. While vibe coding compresses software development cycles, it introduces new challenges in explainability, security, maintenance, and support. It also operates at a low granularity of intent and increases code volume, with little to no focus on architectural integrity. Despite grandiose expectations, developers often spend the same or more time developing and maintaining, and enterprises have to hire new people to compensate for those who were let go. Indeed, the traditionally recommended mitigation strategy involves applying rigorous Software Development Life Cycle practices, e.g., DevOps and Agile methodologies, to AI-generated code snippets. This manual intervention negates the velocity benefits of AI coding and traps organizations in endless integration cycles. This paper proposes a paradigm shift toward using an agentic platform to autonomously perform the AI/vibe coding, based on high-level intent conversations with a meta-agent and a model of constraints derived from an Application-Aware AI utilizing a Real-Time Discovery and Coding engine. By deploying a meta-agent that interacts with a developer agent within a platform-managed lifecycle, enterprises can automate semantic verification and continuous optimization.
This architecture leverages a deterministic model of constraints, transactional object memory (for reliability and rewind), and secure sandboxing to neutralize the inherent risks of probabilistic Large Language Models. We detail how this embedded agentic infrastructure addresses the limitations of vibe coding, ensuring secure, maintainable, and self-evolving enterprise software systems capable of disrupting traditional enterprise applications. The application-aware AI agentic platform that we detail is based on Zenera offerings; others can be considered as long as they follow the principles of constrained vibe coding enumerated in this paper.
Category: Artificial Intelligence