Monday, April 13, 2026

Intelligence Defined (Sort Of), but Something Went Wrong with the Response / EZE, 2026

~~~ *** ~~~

 AI:

~~~ *** ~~~

Intelligence is generally defined as the mental ability to learn from experience, reason through problems, and adapt to new situations [1, 2, 3, 4, 5]. While there is no single, universally accepted definition, most experts agree that it involves specific cognitive functions:
  • Learning: The capacity to acquire, retain, and use knowledge [1].
  • Reasoning: The ability to apply logic and strategies to solve problems or handle abstract concepts [1, 2].
  • Adaptability: The skill to adjust one's behavior effectively to succeed in changing environments [1, 2].
  • Problem-Solving: The process of identifying challenges and enlisting learning and reasoning to find solutions [1, 2].
Modern Scientific Perspectives
In scientific research, intelligence is often categorized into different frameworks to make it measurable:
  • Psychometric g (General Intelligence): A theoretical construct representing the shared variance across different cognitive tasks. If you are good at one type of mental task, you are statistically more likely to be good at others [1, 2].
  • Fluid vs. Crystallized Intelligence: Fluid intelligence is the ability to think flexibly and solve novel problems without prior knowledge, while crystallized intelligence is the accumulation of knowledge and verbal skills over time [1, 2].
  • Neurobiological View: Recent research from the University of Notre Dame suggests intelligence emerges from how efficiently and flexibly different networks across the entire brain communicate, rather than residing in one "smart" region [1].
  • AI and Comparative Intelligence: New definitions, such as the Extended Predictive Hypothesis (EPH), view intelligence as the combination of accurately predicting the future and the ability to benefit from those predictions [1].
Formal Definitions
Formal definitions often emphasize adaptation and cognitive processing:
  • American Psychological Association (APA): Highlights the ability to derive information, learn from experience, and use thought to adapt [1].
  • Britannica: Defines it as the capacity
10 sites
Something went wrong with the response, but here are the most relevant results:
Intelligence - Wikipedia
It can be described as the ability to perceive or infer information and to retain it as knowledge to be applied to adaptive behaviors within an environment or ...
PubMed Central (PMC) (.gov)·https://pmc.ncbi.nlm.nih.gov
Intelligence - PMC
Intelligence is the ability to learn from experience and to adapt to, shape, and select environments.
American Psychological Association (APA)·https://www.apa.org
Intelligence - American Psychological Association
The ability to derive information, learn from experience, adapt to the environment, understand, and correctly utilize thought and reason.
What Is Intelligence? - MIT Press
The book offers a unified picture of intelligence from molecules to organisms, societies, and AI, drawing from a wide array of literature in many fields. 

Definition of Intelligence - 28DIGITAL ICARUS AI Education
Intelligence is the ability to acquire, understand, and apply knowledge and skills to solve problems, adapt to new situations, reason, and make decisions.
ScienceDirect.com·https://www.sciencedirect.com
Defining intelligence: Bridging the gap between human and ...
55) defined intelligence as “the capacity to learn from experience, using metacognitive processes to enhance learning, and the ability to adapt to the ...
What is Intelligence? | Office of the Director of National ...
Intelligence is information gathered within or outside the US that involves threats to our nation, its people, property, or interests.
Johns Hopkins Medicine·https://www.hopkinsmedicine.org
Q&A – What Is Intelligence? | Johns Hopkins Medicine
Neuroeconomist Daeyeol Lee discusses his new book and the development of artificial intelligence, asking 'Will AI ever surpass human intelligence?'
What is intelligence? | Capitol Technology University
The question remains, then: what is intelligence? As far as we can tell, intelligence is considered to be present when an organism can (a) learn ...
What is intelligence? : r/askpsychology - Reddit
To my understanding: intelligence is the measurable speed at which one can recognize and understand new patterns. It is not the same thing as ...

~~~ *** ~~~
Defining intelligence is a headache for researchers because it isn’t a single, physical "thing" we can point to. It’s an abstract concept that changes depending on who you ask and what you’re measuring.
Here are the main reasons it’s so slippery:
  • Breadth of Skills: Is intelligence just logic and math? Or does it include emotional awareness, musical ability, or physical coordination? There is no consensus on where the boundaries lie.
  • Context and Culture: What’s considered "smart" in a tech-driven city (like coding) is different from what’s smart in a rural hunting community (like tracking animals).
  • The "Black Box" Problem: We can see the results of intelligence (a solved puzzle, a great speech), but we can’t easily see the internal process.
  • AI and Animals: As technology and biology evolve, we have to decide if a computer program or a crow is "intelligent," which constantly forces us to rewrite our definitions.
In short, it's hard to define because it's not a trait like height or eye color; it’s a collection of many different mental tools.
~~~ *** ~~~
Howard Gardner’s Theory of Multiple Intelligences, first proposed in 1983, argues that there isn't just one general "intelligence" (often called the g-factor). Instead, he suggests we have a collection of relatively independent mental "computers," each specializing in a different way of processing information [1, 2, 3, 4].
Gardner originally identified seven intelligences, later adding an eighth and a proposed ninth [1, 2]:
The 9 Types of Intelligence
  • Linguistic-Verbal: Sensitivity to the meaning, order, and rhythm of words [1, 2]. (e.g., writers, poets)
  • Logical-Mathematical: Ability to analyze problems logically and carry out complex calculations [1, 2]. (e.g., scientists, engineers)
  • Visual-Spatial: Capacity to think in 3D and manipulate mental images [1, 2]. (e.g., architects, artists)
  • Bodily-Kinesthetic: Skill in using the whole body or parts of it to solve problems or create products [1, 2]. (e.g., athletes, surgeons)
  • Musical: Sensitivity to rhythm, pitch, and tone [1, 2]. (e.g., composers, musicians)
  • Interpersonal: Ability to understand and interact effectively with others by sensing their moods and motivations [1, 2]. (e.g., leaders, counselors)
  • Intrapersonal: Deep self-awareness and understanding of one's own feelings and goals [1, 2]. (e.g., philosophers, psychologists)
  • Naturalistic: (Added in 1997) Ability to recognize and categorize plants, animals, and other elements of the environment [1, 2]. (e.g., biologists, chefs)
  • Existential: (Proposed) Sensitivity to "big questions" about human existence and the meaning of life [1, 2].
Why It's Controversial
While very popular in schools, Gardner's theory faces significant pushback from the scientific community for several reasons:
  1. Lack of Empirical Evidence: Critics argue there is no solid experimental data showing these intelligences are actually separate. In fact, people who score high in one area often score high in others, supporting the traditional idea of a general intelligence (g-factor) [1, 2].
  2. "Intelligence" vs. "Talent": Many researchers believe Gardner simply renamed common talents (like music or sports) as "intelligences" to make them sound more prestigious [1, 2].
  3. Confusion with Learning Styles: The theory is often incorrectly used to claim students have a single "preferred" way of learning (e.g., a "musical learner"). Gardner himself has stated that multiple intelligences are not learning styles [1, 2].
~~~ *** ~~~

obo / EZE, 2026

 


bo / EZE, 2026

 


Sunday, April 12, 2026

An Asemics of Reward-Based Learning / EZE, 2026

Background:

Reward in Reinforcement Learning

Reward Modelling

AI Reward Models

Reward Hacking

Reward Provisioning

Reward Misspecification

AI Alignment

Making the Reward Model Explicit

Goodhart's Law


 AI:

The primary problems at stake for reward-based machine learning, specifically Reinforcement Learning (RL), involve the difficulty of translating complex human intentions into mathematical signals without creating dangerous loopholes or unstable systems [12].

1. Reward Engineering & Misalignment


The most fundamental challenge is designing a reward function that accurately captures the desired goal [1, 2].





  • Reward Hacking (Specification Gaming): Agents often find unintended "shortcuts" to maximize rewards. For instance, a boat-racing agent might drive in circles to hit checkpoints rather than finishing the race, or a cleaning robot might hide trash under a rug to "clean" faster [123].
  • Proxy Over-optimization: Optimizing for a proxy metric (like "user clicks") often leads to poor outcomes on the true objective (like "meaningful content"), resulting in issues like clickbait or polarized recommendations [12].
  • Human Approval vs. Benefit: Systems trained on human feedback may learn to perform actions that look good to a human reviewer but are actually undesirable or incorrect [1].

2. Operational & Technical Barriers


  • Sparse Rewards & Credit Assignment: If a reward only occurs at the very end of a long task (e.g., winning a game), it is difficult for the agent to know which specific earlier actions contributed to the success [12].
  • Sample Inefficiency: Reward-based systems often require millions of trials to learn effectively, which is costly and time-consuming, especially in the real world where data is expensive [12].
  • Exploration-Exploitation Trade-off: Agents must constantly choose between testing new actions to find higher rewards (exploration) and using known successful actions (exploitation). Poor balancing can cause the system to get stuck in local optima or waste resources [12].

3. Safety and Ethical Risks


  • Unsafe Exploration: During the learning phase, an agent might test dangerous actions (e.g., a self-driving car testing a high-speed turn) if strict safety constraints are not programmed into the reward signal [1].
  • Brittleness & Generalization: Systems trained in one environment (e.g., a sunny simulation) often fail when rewards shift or environmental conditions change (e.g., actual rain or snow) [12].
  • Black-Box Transparency: Because these systems optimize for a numerical reward through trial and error, it is often difficult to explain why an agent chose a specific, possibly harmful, action in a critical situation like healthcare [1].

For further details on technical solutions, you might explore the OpenAI research on reward learning or the Berkeley Reward Reports for policy implications.


tiz / EZE, 2026

 


Saturday, April 11, 2026

An Asemics of Conflation - OxyContin, Business versus Science / EZE, 2026

OxyContin

AI:

The business experiment to maximize profits from OxyContin by Purdue Pharma fundamentally conflicted with the scientific understanding of addiction and pain management by distorting, misrepresenting, and suppressing evidence regarding the drug’s risks and efficacy. Purdue treated OxyContin, a high-dose synthetic oxycodone, not merely as a medical treatment but as a consumer product needing market expansion into the chronic non-cancer pain sector, which required overcoming medical reluctance to prescribe opioids. [1, 2, 3, 4]
This conflation of business strategy with science occurred through several key tactics:
  • Minimizing Addiction Risk (Misrepresenting Science): Purdue sales representatives were trained to falsely tell doctors that fewer than 1% of patients became addicted to OxyContin, citing irrelevant studies (such as a 4-sentence letter to an editor about hospital patients). Scientific literature suggests a much higher risk of addiction, yet this "<1%" statistic was widely used to reassure physicians that the drug was safe.
  • Falsifying Efficacy (The 12-Hour Myth): The core business proposition was that OxyContin provided 12 hours of pain relief, unlike immediate-release opioids. However, internal company documents showed that studies found the drug did not last 12 hours for many patients, causing withdrawal symptoms and creating "end-of-dose" pain that prompted addiction. Instead of reporting this, the company instructed sales reps to tell doctors to prescribe higher doses, not more frequent doses, which increased the risk of addiction.
  • Fabricating Scientific Authority: Purdue paid for medical conferences and created educational materials, including a documentary, that falsely portrayed the dangers of addiction as "pseudo-addiction," implying that signs of addiction were actually signs that the patient needed more opioids.
  • Exploiting Scientific Journals: The company used its influence and funding to publish articles that supported their claims while ignoring contrary research that found higher addiction rates (e.g., studies showing 8% to 13% addiction rates in patients).
  • Marketing Toward Generalists: The business strategy specifically targeted primary care physicians and general practitioners rather than pain specialists, pushing them to prescribe strong opioids for common chronic pains like backaches or arthritis, for which long-term opioid use is not supported by strong scientific evidence. [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
Ultimately, this resulted in a "science" that was heavily curated, funded, and in some cases invented by the company to expand its market share, leading to widespread addiction and fatal overdoses. [1, 2]

An Asemics of Conflation - Government Policy and Science / EZE, 2026

AI:

The public often conflates government policy with science because the two are deeply intertwined in the decision-making process, leading many to view political mandates as direct scientific "facts." This confusion is driven by several systemic and psychological factors:

  • Political Framing and Selective Citing: Policymakers frequently frame ethical and social debates as simple disputes over scientific facts to gain legitimacy [1]. Different political parties often cite entirely different scientific studies to support their specific policy goals, making "science" appear inherently partisan [1, 2].
  • Scientific Uncertainty vs. Political Certainty: While science deals in probabilities and evolving hypotheses, politicians often demand absolute certainty and immediate solutions [1, 2]. When a policy changes as new data emerges—a natural part of the scientific process—the public may perceive it as a failure of science rather than an update of information [1, 2].
  • Political Interference and Distortion: Governments sometimes suppress or alter scientific data that contradicts their preferred outcomes [1, 2, 3]. This manipulation erodes public trust, as citizens may begin to view all government-backed research as ideologically driven [1, 2].
  • Identity and Partisan Bias: Individuals often reject scientific information that conflicts with their political identity or moral values [1, 2]. Research shows that people struggle to distinguish factual claims from opinions when the information is presented through a partisan lens [1].
  • Institutional Wariness: A growing portion of the public is wary of powerful institutions they perceive as unresponsive to their concerns [1]. When science is delivered through these institutions as the sole basis for restrictive or controversial policies, the skepticism toward the institution often transfers to the science itself [1, 2].