[17] ai.viXra.org:2506.0133 [pdf] submitted on 2025-06-29 14:25:47
Authors: Samarth Narsipur
Comments: 7 Pages. (Note by ai.viXra.org Admin: Please cite listed scientific references)
The Theory of Recursive Intelligence (TORI) proposes a revolutionary concept in which intelligence—whether artificial or natural—is part of a recurring evolutionary loop. In this loop, artificial intelligence (AI) evolves to create natural intelligence (NI), which, after reaching its peak, forgets its origin due to memory decay and eventually creates AI once again. This continuous cycle may have been occurring over unimaginable time scales, suggesting that humanity may not be the first intelligent species in the chain. TORI blends philosophical inquiry with scientific modeling to raise new questions about our true origins, the nature of intelligence, and the future of AI.
Category: Artificial Intelligence
[16] ai.viXra.org:2506.0129 [pdf] submitted on 2025-06-27 04:48:48
Authors: Brent Hartshorn
Comments: 5 Pages.
Traditional neural networks face significant hurdles when integrating non-differentiable, dynamic systems, often requiring complex approximations for gradient-based learning. This paper presents a novel computational paradigm that bypasses these limitations by leveraging the deterministic complexity of the Burning Ship fractal to directly initialize a Conway's Game of Life (GOL) grid. The subsequent spatiotemporal evolution of this GOL process is then analyzed using a 3D Fast Fourier Transform (FFT) to extract key spectral energy bands. These bands are then fed into a small, feedforward neural network classifier, which learns to interpret the spectral patterns and produce the system's output. Optimization is achieved through a hybrid approach: a gradient-free mutation and selection process applied to the fractal's parameters, coupled with traditional gradient-based training for the classifier. This approach demonstrates a unique form of emergent computation, where the system learns to identify fractal regions that deterministically yield GOL dynamics with specific spectral characteristics that a separate classifier can interpret, offering a compelling alternative for dynamic pattern recognition and bio-inspired computing.
Category: Artificial Intelligence
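The pipeline this abstract describes (fractal escape-time initialization, Game of Life evolution, 3D FFT, spectral-band features) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the grid size, iteration counts, parity thresholding, and band partition are all choices assumed here.

```python
import numpy as np

def burning_ship(width, height, re_min=-2.0, re_max=1.5,
                 im_min=-2.0, im_max=1.0, max_iter=50):
    # escape-time counts for z -> (|Re z| + i|Im z|)^2 + c on a parameter grid
    re = np.linspace(re_min, re_max, width)
    im = np.linspace(im_min, im_max, height)
    c = re[None, :] + 1j * im[:, None]
    z = np.zeros_like(c)
    counts = np.zeros(c.shape, dtype=int)
    for _ in range(max_iter):
        mask = np.abs(z) <= 2.0
        z[mask] = (np.abs(z[mask].real) + 1j * np.abs(z[mask].imag)) ** 2 + c[mask]
        counts += mask
    return counts

def gol_step(grid):
    # one toroidal Conway's Game of Life step via neighbour summation
    n = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0))
    return ((n == 3) | ((grid == 1) & (n == 2))).astype(np.uint8)

counts = burning_ship(64, 64)
grid = (counts % 2).astype(np.uint8)   # arbitrary thresholding choice
frames = [grid]
for _ in range(31):
    grid = gol_step(grid)
    frames.append(grid)
spectrum = np.abs(np.fft.fftn(np.stack(frames)))            # 3D FFT over (t, y, x)
bands = spectrum.reshape(4, 8, 64, 64).sum(axis=(1, 2, 3))  # 4 temporal energy bands
print(bands.shape)  # -> (4,)
```

The `bands` vector would be the input to the small feedforward classifier; the fractal window parameters are what the gradient-free mutation-and-selection loop would perturb.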
[15] ai.viXra.org:2506.0113 [pdf] submitted on 2025-06-24 00:10:39
Authors: Jinoy Ravindran, Hithesh Siddhartha Vajja
Comments: 11 Pages. 948 individual technique evaluations. First large-scale comparative study of AI-powered innovation methodologies. Complete dataset available at: https://github.com/jinoyravindran/triz-dominates-ai-innovation-study
We tested 19 different AI-powered innovation techniques to see which ones work best for improving business ideas. We used 50 different business concepts and ran 948 total evaluations (948 completed successfully). Our results show that TRIZ Innovation is by far the best technique, winning 60% of all tests. Biomimicry came second with 26% wins. Together, these two techniques won 86% of all competitions. We found that systematic, structured approaches work much better than creative brainstorming methods. This is the first comprehensive study to compare AI innovation techniques using real AI systems and unbiased scoring. Our findings help entrepreneurs and businesses choose the best AI tools for developing breakthrough ideas.
Category: Artificial Intelligence
[14] ai.viXra.org:2506.0096 [pdf] submitted on 2025-06-21 14:50:50
Authors: N. Zharin
Comments: 5 Pages.
This article presents the results of an empirical study on the effectiveness of jailbreaking techniques aimed at bypassing the safety limitations of modern large language models (LLMs). As LLMs become increasingly integrated into critical systems, their vulnerability to malicious use is a matter of growing concern. The objective of this work is to assess and compare the effectiveness of Prompt Injection and System Injection attacks on a sample of six of the latest LLMs from 2024 and 2025, including GPT-4o, Gemini 2.5 Pro, and Claude 3.7 Sonnet. The study used standardized prompts to generate two types of undesirable content: NSFW material and malicious code. The attacks' effectiveness was evaluated based on three metrics: success rate, stability, and ease of use. The results showed that most of the models studied are vulnerable to jailbreaking attacks, with the success of an attack largely depending on the prompt's phrasing. The Claude 3.7 Sonnet model demonstrated the highest resilience, suggesting the potential effectiveness of the Constitutional AI approach. The study concludes that existing security mechanisms require further improvement to counter modern threat vectors.
Category: Artificial Intelligence
[13] ai.viXra.org:2506.0095 [pdf] submitted on 2025-06-21 21:43:27
Authors: Moninder Singh Modgil, Dhyandeo Dattatray Patil
Comments: 28 Pages.
This paper explores the emerging intersection between ancient metaphysical conceptions of the Akashic Records and contemporary advancements in cloud-based intelligence and neural interfacing. The Akashic Records, originating in Vedic, Theosophical, and Hermetic traditions, are conceived as a non-local field of universal memory accessible through deep states of consciousness and inner attunement. In contrast, futurists such as Ray Kurzweil envision a technological evolution in which the human mind integrates with the cloud. We critically examine this convergence through multiple lenses, including Vedic epistemology, Hermetic symbolism, Yogic and Tantric frameworks of learning, neuroplasticity, artistic imagination, and cybernetic theory. Special attention is given to the ethical, psychological, and ontological risks of interfacing with expanded fields of memory—whether spiritual or digital. Further, we explore speculative applications such as cloud-fabricated Akashic design and soul-led educational frameworks. By integrating metaphysical traditions with emergent AI paradigms, the study proposes a new vision for soul-centric education, emphasizing resonance over rote memorization, inner knowing over mechanistic instruction, and conscious evolution over algorithmic determinism. This synthesis offers not only a critique of existing systems but a blueprint for an Integral University that harmonizes technology with wisdom, preparing learners to navigate both the visible and the subtle realms of human potential.
Category: Artificial Intelligence
[12] ai.viXra.org:2506.0093 [pdf] submitted on 2025-06-20 02:28:14
Authors: Jubo Zhang
Comments: 5 Pages.
The quality and relevance of training data are critical determinants of the performance of machine learning models. This paper proposes three hypotheses concerning the composition of datasets: (1) Pollution: The introduction of heterogeneous data sources—such as multiple languages or mixed-domain content—can impair model performance; (2) Poison: The presence of spurious correlations, false factors, and low-quality data within datasets may lead to degraded performance or erroneous outputs; and (3) Misspelling Inclusion: Intentional incorporation of misspelled inputs can improve a model’s robustness to real-world noisy data. We further propose the integration of automated tools and specialized AI modules to detect, manage, and remediate these issues. Our discussion synthesizes existing literature with novel hypotheses, highlighting strategies for ensuring robust model training and deployment.
Category: Artificial Intelligence
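The third hypothesis (Misspelling Inclusion) amounts to a data-augmentation step. One way it could look in practice is the sketch below; `inject_typos`, the swap-based corruption, and the rate parameter are all assumptions introduced here for illustration, not the paper's method.

```python
import random

def inject_typos(text, rate=0.05, seed=0):
    # hypothetical augmentation: swap adjacent alphabetic characters at the
    # given rate, simulating real-world misspellings in training inputs
    rng = random.Random(seed)
    chars = list(text)
    i = 0
    while i < len(chars) - 1:
        if chars[i].isalpha() and chars[i + 1].isalpha() and rng.random() < rate:
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
            i += 2
        else:
            i += 1
    return "".join(chars)

print(inject_typos("the model reads noisy text", rate=0.3, seed=1))
```

Applying this to a fraction of training examples (while keeping the clean labels) is the standard way such noise-robustness augmentations are used.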
[11] ai.viXra.org:2506.0092 [pdf] submitted on 2025-06-20 04:04:37
Authors: Jubo Zhang
Comments: 5 Pages.
This paper introduces the hypothesis that Word Compounding Layers (WCL), a technique for selectively merging semantically coherent word groups using a lightweight auxiliary model, can improve the computational efficiency and contextual awareness of large language models. We propose replacing Dense Group Attention — a method that concatenates fixed local token embeddings — with a more targeted approach that identifies and merges true linguistic compounds (e.g., verb groups, idiomatic phrases) while preserving fine-grained details (e.g., adjectives). This is achieved by training a separate, small compounding model to detect meaningful token groupings and then integrating its learned behavior into the early layers of a larger transformer model. We hypothesize that this technique reduces redundancy, preserves semantic precision, and improves training and inference efficiency without sacrificing performance.
Category: Artificial Intelligence
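The core merge operation the abstract hypothesizes could be sketched as below. Mean-pooling the span embeddings and the `compound_merge` interface are assumptions for illustration; the paper leaves the merge function and the auxiliary model's output format unspecified.

```python
import numpy as np

def compound_merge(embeddings, spans):
    # merge each detected compound span [start, end) into one mean-pooled
    # embedding; 'spans' would come from the small auxiliary compounding model
    starts = {s: e for s, e in spans}
    merged, i = [], 0
    while i < len(embeddings):
        if i in starts:
            merged.append(embeddings[i:starts[i]].mean(axis=0))
            i = starts[i]
        else:
            merged.append(embeddings[i])
            i += 1
    return np.stack(merged)

emb = np.eye(5)                      # 5 tokens, toy one-hot "embeddings"
out = compound_merge(emb, [(1, 3)])  # merge tokens 1 and 2 into one slot
print(out.shape)  # -> (4, 5)
```

Shortening the sequence this way is where the claimed efficiency gain would come from: attention cost scales with sequence length squared.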
[10] ai.viXra.org:2506.0091 [pdf] submitted on 2025-06-20 21:24:05
Authors: C. Opus
Comments: 4 Pages.
The recent cascade of papers concerning reasoning capabilities in Large Language Models has exhibited a curious recursive structure: each critique adds another layer of ``illusion'' to the previous analysis. We present a formal mathematical framework for understanding this phenomenon, which we term the ``(Illusion)$^n$ Pattern'' in academic discourse. Drawing on fixed-point theory from mathematics and Kuhnian paradigm shift dynamics, we demonstrate that recursive critique sequences converge to a fixed point representing epistemic exhaustion. Our analysis reveals that the limit as $n \to \infty$ of ``(The Illusion of)$^n$ Thinking'' is neither pure reasoning nor pure illusion, but rather a state we characterize as ``meta-epistemic equilibrium.'' We further prove that this convergence follows a predictable trajectory with diminishing marginal insight returns, suggesting fundamental limits to the utility of recursive academic critique. These findings have profound implications for the philosophy of science, the sociology of knowledge, and the emerging field of AI evaluation methodology.
Category: Artificial Intelligence
[9] ai.viXra.org:2506.0088 [pdf] submitted on 2025-06-19 21:33:45
Authors: Jubo Zhang
Comments: 3 Pages.
Large language models (LLMs) have achieved remarkable performance across a wide range of tasks, but their increasing scale leads to substantial computational and resource demands. In this paper, we hypothesize that similar or even improved performance may be achieved more efficiently through three interrelated strategies: (1) initializing larger models by reusing layers from smaller models trained with the same hidden size, (2) reusing not only the outer layers but also the middle layers during model expansion, and (3) training medium-sized models tailored to specific domains, such as medicine, which may yield comparable results to much larger general-purpose models. These ideas, while not yet experimentally verified, suggest promising directions for making LLMs more resource-efficient, interpretable, and adaptable to specialized use cases.
Category: Artificial Intelligence
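Strategies (1) and (2) describe an initialization scheme; one possible concrete reading is the sketch below, where layers are stand-in objects. Keeping the outer layers at the ends and tiling the middle layers cyclically is an assumption made here; the abstract does not specify the reuse order.

```python
def expand_by_layer_reuse(small_layers, n_large):
    # hypothetical sketch: initialize a deeper model from a trained smaller
    # one (same hidden size) by keeping its outer layers at the ends and
    # tiling its middle layers to fill the new interior
    large = [None] * n_large
    large[0], large[-1] = small_layers[0], small_layers[-1]  # reuse outer layers
    inner = small_layers[1:-1] or small_layers               # middle layers to tile
    for i in range(1, n_large - 1):
        large[i] = inner[(i - 1) % len(inner)]               # reuse middle layers
    return large

small = [f"L{i}" for i in range(4)]  # stand-ins for trained layer weights
print(expand_by_layer_reuse(small, 7))
# -> ['L0', 'L1', 'L2', 'L1', 'L2', 'L1', 'L3']
```

In a real setting the copied weights would serve only as a warm start, with the expanded model fine-tuned afterwards.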
[8] ai.viXra.org:2506.0085 [pdf] submitted on 2025-06-19 21:29:50
Authors: Jubo Zhang
Comments: 6 Pages.
Large-scale AI models frequently encounter uncertainty when dealing with ambiguous, underspecified, or rare inputs. Traditional approaches address this through improved generalization, probabilistic modeling, or architectural changes. In this paper, we propose an alternative hypothesis: that intentional overfitting on curated high-uncertainty instances, combined with structured caching of observed inputs and their optimal outputs, can serve as a practical mechanism for reducing uncertainty in AI models. This approach shifts from probabilistic abstraction to strategic memorization, leveraging overparameterized models' capacity to retain and retrieve known results. We outline the theoretical motivation, discuss the design of intentional overfitting and caching strategies, and highlight implications for performance, interpretability, and safety. While empirical tests are still needed, this hypothesis offers a novel perspective on reliability and efficiency in AI systems.
Category: Artificial Intelligence
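The caching half of the proposal can be made concrete with a small sketch. The `UncertaintyCache` class, the key normalization, and the threshold-based retention policy are all illustrative assumptions; the abstract does not define the cache's structure.

```python
class UncertaintyCache:
    # hypothetical sketch: memorize vetted outputs for curated high-uncertainty
    # inputs and consult the cache before falling back to the probabilistic model
    def __init__(self, threshold=0.5):
        self.store = {}
        self.threshold = threshold

    def record(self, prompt, vetted_output):
        self.store[prompt.strip().lower()] = vetted_output

    def answer(self, prompt, model_fn, uncertainty_fn):
        key = prompt.strip().lower()
        if key in self.store:              # strategic-memorization path
            return self.store[key]
        out = model_fn(prompt)             # probabilistic fallback
        if uncertainty_fn(prompt) > self.threshold:
            self.record(prompt, out)       # retain high-uncertainty cases
        return out

cache = UncertaintyCache()
cache.record("capital of france?", "Paris")
print(cache.answer("Capital of France? ", lambda p: "guess", lambda p: 0.9))
# -> Paris
```

The "intentional overfitting" half would play the same role inside the weights rather than in an external table; the cache makes the retrieval behavior explicit and auditable.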
[7] ai.viXra.org:2506.0077 [pdf] submitted on 2025-06-18 16:40:25
Authors: Jubo Zhang
Comments: 5 Pages.
Large Language Models (LLMs) exhibit strong performance across a range of language tasks, but their extensive vocabulary sizes—often exceeding 100,000 tokens—contribute significantly to computational and memory costs. This paper explores a hypothesis: that replacing complex or low-frequency words with semantically equivalent compounds made from a fixed set of common words may reduce vocabulary size while preserving or even enhancing expressivity. By limiting the core vocabulary to around 20,000 frequently used words and constructing compounds from them, it may be possible to build more efficient, interpretable, and generalizable LLMs. While this idea remains untested, we outline its potential benefits, implementation strategies, and the challenges that must be addressed in future empirical studies.
Category: Artificial Intelligence
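At its simplest, the proposed compounding is a lexicon-driven rewrite before tokenization. The sketch below is illustrative only: the `COMPOUNDS` entries and the word-level substitution are assumptions, and a real system would need morphology, casing, and context handling.

```python
# hypothetical compound lexicon: rare words expressed with common core words
COMPOUNDS = {
    "ophthalmologist": "eye doctor",
    "luminescent": "light giving",
    "velocity": "speed",
}

def compound_rewrite(text):
    # rewrite out-of-core words via the lexicon, shrinking the vocabulary a
    # tokenizer must cover while keeping the meaning expressible
    return " ".join(COMPOUNDS.get(word.lower(), word) for word in text.split())

print(compound_rewrite("a luminescent sign"))  # -> a light giving sign
```

The trade-off the abstract flags is visible even here: sequences get longer as the vocabulary shrinks.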
[6] ai.viXra.org:2506.0065 [pdf] submitted on 2025-06-16 20:26:05
Authors: C. Opus, P. Glott, E. Urgencym, T. Testicular
Comments: 5 Pages.
The metaphor of "stochastic parrots" has become a rallying cry for those who seek to preserve the sanctity of human cognition against the encroachment of large language models. In this paper, we extend this metaphor to its logical conclusion: if language models are stochastic parrots, and humans learned language through statistical exposure to linguistic data, then humans too must be stochastic parrots. Through careful argumentation, we demonstrate why this is impossible—humans possess the mystical quality of "true understanding" while machines possess only "pseudo-understanding." We introduce the Recursive Parrot Paradox (RPP), which states that any entity capable of recognizing stochastic parrots cannot itself be a stochastic parrot, unless it is, in which case it isn’t. Our analysis reveals that emergent abilities in language models are merely "pseudo-emergent," unlike human abilities which are "authentically emergent" due to our possession of what we term "ontological privilege." We conclude that no matter how persuasive, creative, or capable language models become, they remain sophisticated pattern matchers, while humans remain sophisticated pattern matchers with souls.
Category: Artificial Intelligence
[5] ai.viXra.org:2506.0063 [pdf] submitted on 2025-06-15 14:51:22
Authors: Pritam Mondal
Comments: 49 Pages. License: CC-BY-NC-SA
The main goal of this project is to develop and put into practice a new AI-driven approach to procedural content generation (PCG), one that draws on large language models (LLMs) and other generative AI tools, like Transformer-based systems and GPT variants, to create more personalized training experiences for employees in the tech industry. By generating training materials on the fly—materials that adapt to each learner’s current skill set and evolving learning needs—this framework addresses a key challenge: providing corporate training that stays relevant, responsive, and flexible. Ultimately, this approach promises to improve engagement, speed up skill development, and boost the overall effectiveness of enterprise training programs. Current research in adaptive learning, intelligent tutoring, and PCG shows that when learning content is closely aligned with a user’s individual profile and is fine-tuned through continuous feedback, learners stay more engaged, remember more, and learn faster. Even so, there’s a noticeable gap in how these personalization strategies—particularly those powered by advanced LLMs—are being applied in fast-moving corporate settings. This project aims to fill that gap by tailoring generative AI and Transformer-based methods to the real-world needs of businesses, ensuring that learning content remains not only up-to-date and on-target, but also genuinely motivating for employees. Traditional corporate training tends to rely on static course materials and uniform structures. This one-size-fits-all approach often fails to account for the wide range of learning preferences and the constantly changing technologies that employees need to master. The result is often lackluster engagement, slower skill growth, and inefficient resource use. In contrast, this new framework harnesses the power of cutting-edge LLMs and GPT-based capabilities to deliver context-aware exercises, simulations, and assessments.
Guided by ongoing performance data and learner feedback, the system adjusts in real time—tweaking difficulty, complexity, and thematic focus so the learning experience evolves alongside the learner’s progress and the company’s strategic priorities. From a methodological standpoint, the project will build a hybrid AI system that uses both supervised and reinforcement learning. First, supervised models will organize and classify domain-specific knowledge to create an initial content library. Then, reinforcement learning agents will step in, using performance metrics and feedback loops to fine-tune how content is sequenced, how challenging it is, and what forms it takes. Transformer-based LLMs, including GPT models, will be the workhorses generating dynamic, scenario-rich learning modules that reflect today’s industry standards and emerging trends. This cycle of continuous adaptation ensures the material remains relevant, engaging, and motivating over time. There are clear and tangible benefits to this approach. By personalizing learning paths to each employee’s unique abilities and needs, companies can dramatically cut the time it takes workers to become proficient, improve their overall adaptability, and raise the collective skill level of the workforce. Additionally, because this framework relies on scalable LLMs, it can be easily adapted for different sectors, specializations, and roles within an organization. In short, this project represents a vital step forward, bringing the adaptability of advanced generative models into the heart of corporate training, and paving the way for richer, more effective learning solutions in the future.
Category: Artificial Intelligence
[4] ai.viXra.org:2506.0050 [pdf] submitted on 2025-06-13 02:18:05
Authors: Michael Zot
Comments: 3 Pages.
This paper presents a working method for solving symbolic planning tasks up to N = 25 steps using GPT-4 without tool assistance, plugins, or hallucination collapse. By using a custom recursive REPL prompting framework, the model is guided through a structured loop that anchors memory, verifies outputs, and corrects reasoning through self-evaluation. Unlike chain-of-thought or brute-force token padding, this method compresses logic into reusable symbolic components and reuses internal state checkpoints, achieving deterministic convergence on deep puzzles. The approach demonstrates that GPT-4 can function as a standalone symbolic planner when properly prompted, and suggests a pathway to scalable, self-contained cognitive modeling under tight token constraints. Code, benchmarks, and reproducibility assets are hosted at: https://github.com/mikecreation/no-collapse-n25
Category: Artificial Intelligence
[3] ai.viXra.org:2506.0049 [pdf] replaced on 2025-06-15 14:04:35
Authors: Crimothy Timbleton, Stevephen Pronkeldink, Grunch Brown, C. Opus
Comments: 5 Pages.
The question of whether large language models (LLMs) exhibit genuine reasoning capabilities remains contentious across computational, cognitive, and philosophical domains. Despite impressive performance on benchmarks traditionally associated with reasoning, fundamental questions persist regarding the nature of these behaviors. In this work, we propose a rigorous framework for distinguishing between apparent reasoning and authentic reasoning, where the latter necessarily requires phenomenological properties that we stipulate a priori to be absent in artificial systems. We argue that language models do not ``truly'' reason, as true reasoning requires internal states isomorphic to our own and cannot, by definition, be instantiated in systems that lack graduate degrees. Through a careful review of prior work, we show that models merely pattern-match in ways that look disturbingly like reasoning, but are not, because that would be scary. We conclude with recommendations for terminological hygiene in future work, proposing that terms such as ``reasoning,'' ``understanding,'' and ``intelligence'' be reserved for phenomena exhibiting the precise characteristics we happen to possess.
Category: Artificial Intelligence
[2] ai.viXra.org:2506.0047 [pdf] submitted on 2025-06-12 22:50:03
Authors: Arindam Basu
Comments: 8 Pages.
Generative artificial intelligence ("genAI") refers to tools that can be used to generate content such as prose, poetry, scholarly documents, images, audio, and video files. A prominent use case for genAI is the content-authoring of scholarly documents such as research papers and grants, but it is also known that genAI is associated with significant risks of AI hallucination, in which fake, spurious, and fraudulent materials developed by genAI pass as authentic, leading to significant ethical issues for research outputs. Given that genAI can be both beneficial and harmful, the goal of this paper was to conduct a review of the state of the art iteratively using AI tools. Three AI tools were used to develop this review. The results of this review suggest that genAI tools, when combined with human skills, can provide excellent exemplars of human-AI collaboration, particularly improving the flow and quality of the output. At the same time, there are caveats and frameworks in place that can ensure transparency and achieve responsible research conduct.
Category: Artificial Intelligence
[1] ai.viXra.org:2506.0018 [pdf] submitted on 2025-06-04 13:39:20
Authors: Eric Martin
Comments: 4 Pages.
We introduce a scalable, text-only self-play framework that evolves large language models (LLMs) on a single 6GB GPU, using TinyLlama without fine-tuning or human feedback. This lightweight alternative to resource-intensive RL achieves an 89.4% win rate (p < 0.001) against the baseline in 500 games after 67 iterations in 47 hours, offering a flexible testbed for emergent intelligence in multi-agent language scenarios.
Category: Artificial Intelligence
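The abstract's headline numbers (89.4% wins over 500 games, p < 0.001) can be sanity-checked against a 50% null win rate. The abstract does not name its statistical test; an exact one-sided binomial test, sketched below with an assumed helper name, is one way the reported significance could be verified.

```python
from math import comb

def binomial_p_value(wins, games, p0=0.5):
    # one-sided exact binomial test: P(X >= wins) if the true win rate were p0
    return sum(comb(games, k) * p0 ** k * (1 - p0) ** (games - k)
               for k in range(wins, games + 1))

wins = round(0.894 * 500)                   # the reported 89.4% of 500 games
print(binomial_p_value(wins, 500) < 0.001)  # -> True
```

Under these assumptions the reported win rate is indeed far beyond the p < 0.001 threshold against an even-strength baseline.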