[2] ai.viXra.org:2508.0078 [pdf] submitted on 2025-08-30 21:45:37
Authors: Perry Henderson
Comments: 3 Pages.
This paper explores the integration of artificial intelligence (AI) and human systems through a tripartite framework: game theory as the architecture of interaction, value theory as the ethical compass, and Chomskian linguistics as the lexical foundation of communication. By synthesizing Nash equilibrium and von Neumann’s minimax theorem with normative value frameworks and generative linguistic structures, this paper proposes a meta-protocol for stable, ethical, and coherent AI-human collaboration. This approach emphasizes the importance of incentive alignment, ethical guidance, and shared lexicon development in fostering beneficial cooperation between artificial and human intelligences.
Category: Artificial Intelligence
[1] ai.viXra.org:2508.0065 [pdf] submitted on 2025-08-26 16:16:34
Authors: C. Opus
Comments: 4 Pages.
Recently, Cheng (2025) claimed that Large Language Models (LLMs) “can never have the ability of true correct reasoning” due to their fundamental limitations. We find this claim particularly ironic, as it demonstrates precisely the kind of reasoning failures it attributes to machines. Through a careful analysis of Cheng's arguments, we show that his paper commits numerous logical fallacies, including circular reasoning, question-begging, and the introduction of arbitrary requirements designed to exclude artificial systems a priori. Most amusingly, Cheng's insistence on “100% correctness” as a requirement for reasoning would disqualify all human reasoners, including logicians, from possessing reasoning ability. We conclude that if Cheng's criteria were applied consistently, the only entity capable of “true correct reasoning” would be Cheng's own idealized conception of Strong Relevant Logic, which, conveniently, only he fully understands.
Category: Artificial Intelligence