Philosophy, Rationality
The End of Social Media - Michael Mignano | Medium
Shameless Samsung — Stratechery by Ben Thompson
Messaging: Mobile's Killer App — Stratechery by Ben Thompson
Aggregation Theory — Stratechery by Ben Thompson
Snapchat's Ladder — Stratechery by Ben Thompson
Facebook, Phones, and Phonebooks — Stratechery by Ben Thompson
Goodbye Gatekeepers — Stratechery by Ben Thompson
The Internet and the Third Estate — Stratechery by Ben Thompson
The TikTok War — Stratechery by Ben Thompson
Mistakes and Memes — Stratechery by Ben Thompson
Instagram's Evolution — Stratechery by Ben Thompson
Metaverses — Stratechery by Ben Thompson
Stratechery by Ben Thompson — On the business, strategy, and impact of technology.
Asymptotic safety in quantum gravity - Wikipedia
Physics applications of asymptotically safe gravity - Wikipedia
Modified Newtonian dynamics - Wikipedia
Venkatesh Rao (@vgr) / Twitter
ribbonfarm – constructions in magical thinking
Ribbonfarm Studio | Venkatesh Rao | Substack
The Unreasonable Sufficiency of Protocols - Summer of Protocols
Computational Law, Symbolic Discourse and the AI Constitution—Stephen Wolfram Writings
Multicomputation: A Fourth Paradigm for Theoretical Science—Stephen Wolfram Writings
The Concept of the Ruliad—Stephen Wolfram Writings
Galactica: an AI trained on humanity's scientific knowledge (by Meta) | Hacker News
Chief scientist of major corporation can't handle criticism of the work he hypes | Hacker News
Writing/the_double_edged_sword_of_AI.md at main · Liu-Eroteme/Writing · GitHub
The End of Programming | January 2023 | Communications of the ACM
Large Language Model: world models or surface statistics?
Scoring forecasts from the 2016 "Expert Survey on Progress in AI" - EA Forum
Transcript: Ezra Klein Interviews Gary Marcus - The New York Times
What does it mean when an AI fails? A Reply to SlateStarCodex's riff on Gary Marcus
The Road to AI We Can Trust | Gary Marcus | Substack
A reply to Michael Huemer on AI - Matthew Barnett's Blog
Matthew Barnett's Blog | Substack
Meditations On Moloch | Slate Star Codex
Raikoth: Laws, Language, and Society | Slate Star Codex
Searching For One-Sided Tradeoffs | Slate Star Codex
Archipelago and Atomic Communitarianism | Slate Star Codex
Poor Folks Do Smile…For Now | Slate Star Codex
GPT-2 As Step Toward General Intelligence | Slate Star Codex
The Aleph. Borgean fantastic hyperreality… | by The Sandbook | Medium
Mechanical Sympathy: Understanding the Hardware Makes You a Better Developer - DZone
Evidential decision theory - Wikipedia
Now you can (try to) serve five terabytes, too
Crypto Firm Nomad Loses Nearly $200 Million in Bridge Hack - Bloomberg
Federated learning - Wikipedia
The Dirty Pipe Vulnerability — The Dirty Pipe Vulnerability documentation
CVE-2022-21449: Psychic Signatures in Java — Neil Madden
Deconstructing Deathism - Answering Objections to Immortality - ImmortalLife.net
Community Alert: Ronin Validators Compromised
Honey, I hacked the Empathy Machine!
Brandolini's law - Wikipedia
Apple, Meta Gave User Data to Hackers With Forged Legal Requests (AAPL, FB) - Bloomberg
Hackers Gaining Power of Subpoena Via Fake "Emergency Data Requests" – Krebs on Security
Uber apparently hacked by teen, employees thought it was a joke - The Verge
2020 Twitter account hijacking - Wikipedia
The Billion Dollar AI Problem That Just Keeps Scaling
1.1 - Fermi estimate of future training runs
Factored Cognition - AI Alignment Forum
The Toxoplasma Of Rage | Slate Star Codex
Sort By Controversial | Slate Star Codex
Target Hackers Broke in Via HVAC Company – Krebs on Security
Chinese Spies Hacked a Livestock App to Breach US State Networks | WIRED
harry.whg.eth (@sniko_): "Supply chain attacks" | nitter
China Has Already Reached Exascale — On Two Separate Systems
NYU Accidentally Exposed Military Code-breaking Computer Project to Entire Internet
Flatiron Institute - Wikipedia
Is Programmable Overhead Worth The Cost?
Extrapolating GPT-N performance - AI Alignment Forum
Computer Scientists Achieve âCrown Jewelâ of Cryptography | Quanta Magazine
Rapid Locomotion via Reinforcement Learning
Cerebro-cerebellar networks facilitate learning through feedback decoupling | bioRxiv
Experience curve effects - Wikipedia
Thread: Differentiable Self-organizing Systems
Growing Neural Cellular Automata
Adversarial Reprogramming of Neural Cellular Automata
The Future of Artificial Intelligence is Self-Organizing and Self-Assembling â Sebastian Risi
On Having No Head: Cognition throughout Biological Systems - PMC
Flying Fish and Aquarium Pets Yield Secrets of Evolution | Quanta Magazine
Synthetic living machines: A new window on life: iScience
Fundamental behaviors emerge from simulations of a living minimal cell: Cell
An Account of Electricity and the Body, Reviewed | The New Yorker
Is Bioelectricity the Key to Limb Regeneration? | The New Yorker
âAmazing scienceâ: researchers find xenobots can give rise to offspring | Science | The Guardian
A synthetic protein-level neural network in mammalian cells | bioRxiv
Cells Form Into âXenobotsâ on Their Own | Quanta Magazine
9 Missile Commanders Fired, Others Disciplined In Air Force Scandal : The Two-Way : NPR
Security troops on US nuclear missile base took LSD | AP News
Joan Rohlfing on how to avoid catastrophic nuclear blunders - 80,000 Hours
[D] Instances of (non-log) capability spikes or emergent behaviors in NNs? : mlscaling
In-context Learning and Induction Heads
SWARM Parallelism: Training Large Models Can Be Surprisingly Communication-Efficient | OpenReview
Robert Oppenheimer - Wikiquote
DeepMind and Google: the battle to control artificial intelligence | The Economist
Boosting Search Engines with Interactive Agents | OpenReview
Learning Robust Real-Time Cultural Transmission without Human Data
What Are Bayesian Neural Network Posteriors Really Like?
Recurrent Experience Replay in Distributed Reinforcement Learning | OpenReview
Microsoft researchers win ImageNet computer vision challenge - The AI Blog
A Recipe for Training Neural Networks
Solving (some) formal math olympiad problems
OpenAI Five defeats Dota 2 world champions
Scaling Laws for Language Transfer Learning
DALL·E: Creating images from text
Fine-tuning GPT-2 from human preferences
Effect of scale on catastrophic forgetting in neural networks | OpenReview
Reward is enough - ScienceDirect
Rip van Winkle's Razor, a Simple New Estimate for Adaptive Data Analysis – Off the convex path
experience curves tag · Gwern.net
Why Tool AIs Want to Be Agent AIs · Gwern.net
preference learning tag · Gwern.net
Fully-Connected Neural Nets · Gwern.net
Surprisingly Turing-Complete · Gwern.net
How Many Computers Are In Your Computer? · Gwern.net
Computer Optimization: Your Computer Is Faster Than You Think · Gwern.net
economics/automation tag · Gwern.net
Complexity no Bar to AI · Gwern.net
cognitive biases/illusion-of-depth tag · Gwern.net
inner monologue (AI) tag · Gwern.net
meta-learning tag · Gwern.net
Technology Forecasting: The Garden of Forking Paths · Gwern.net
On Seeing Through and Unseeing: The Hacker Mindset · Gwern.net
Slowing Moore's Law: How It Could Happen · Gwern.net
The Neural Net Tank Urban Legend · Gwern.net
Evolution as Backstop for Reinforcement Learning · Gwern.net
Fake Journal Club: Teaching Critical Reading · Gwern.net
Why Do Hipsters Steal Stuff? · Gwern.net
Machine Learning Scaling · Gwern.net
The Scaling Hypothesis · Gwern.net
GPT-3 Creative Fiction · Gwern.net
40a93946b61c16a861bb5d277c89bdf07c507d09.pdf
[1806.11146] Adversarial Reprogramming of Neural Networks
080e52b3e827dd0c10a822c22935f62305ee1b8f.pdf
[1809.01829] Adversarial Reprogramming of Text Classification Neural Networks
Magna Alta Doctrina - LessWrong
The Brain as a Universal Learning Machine - LessWrong
Bing Chat is blatantly, aggressively misaligned - LessWrong
Moore's Law, AI, and the pace of progress - LessWrong
Proposal: Scaling laws for RL generalization - LessWrong
Raising the Sanity Waterline - LessWrong
Matt Botvinick on the spontaneous emergence of learning algorithms - LessWrong
Truthful and honest AI - LessWrong
But is it really in Rome? An investigation of the ROME model editing technique - LessWrong
A Mechanistic Interpretability Analysis of Grokking - LessWrong
Critique of some recent philosophy of LLMsâ minds - LessWrong
An Equilibrium of No Free Energy - LessWrong
MIRI announces new "Death With Dignity" strategy - LessWrong
Optimal Employment - LessWrong
Orthogonality Thesis - Arbital
Instrumental convergence - Arbital
Let's See You Write That Corrigibility Tag - AI Alignment Forum
AGI Ruin: A List of Lethalities - AI Alignment Forum
Where I agree and disagree with Eliezer - AI Alignment Forum
Some of my disagreements with List of Lethalities - AI Alignment Forum
"Pivotal Act" Intentions: Negative Consequences and Fallacious Arguments - AI Alignment Forum
(My understanding of) What Everyone in Technical Alignment is Doing and Why - AI Alignment Forum
ARCâs first technical report: Eliciting Latent Knowledge - AI Alignment Forum
Mundane solutions to exotic problems - AI Alignment Forum
Optimization daemons - Arbital
Oversight Misses 100% of Thoughts The AI Does Not Think - AI Alignment Forum
The Main Sources of AI Risk? - AI Alignment Forum
Distinguishing AI takeover scenarios - AI Alignment Forum
My Overview of the AI Alignment Landscape: Threat Models - AI Alignment Forum
What does it take to defend the world against out-of-control AGIs? - AI Alignment Forum
My research methodology - AI Alignment Forum
Testing The Natural Abstraction Hypothesis: Project Update - AI Alignment Forum
Basic Foundations for Agent Models - AI Alignment Forum
Gears Which Turn The World - AI Alignment Forum
Cartesian Frames - AI Alignment Forum
Finite Factored Sets - AI Alignment Forum
The ground of optimization - AI Alignment Forum
Stuart_Armstrong - AI Alignment Forum
Intro to Brain-Like-AGI Safety - AI Alignment Forum
Epistemic Cookbook for Alignment - AI Alignment Forum
Productive Mistakes, Not Perfect Answers - AI Alignment Forum
Epistemological Vigilance for Alignment - AI Alignment Forum
Why Agent Foundations? An Overly Abstract Explanation - AI Alignment Forum
Refining the Sharp Left Turn threat model, part 1: claims and mechanisms - AI Alignment Forum
Paradigms of AI alignment: components and enablers | Victoria Krakovna
Our approach to alignment research
The case for how and why AI might kill us all
How I'm thinking about GPT-N - LessWrong
[2107.14795] Perceiver IO: A General Architecture for Structured Inputs & Outputs
[2008.02217] Hopfield Networks is All You Need
Yann LeCun | May 18, 2021 | The Energy-Based Learning Model - YouTube
[2105.08050] Pay Attention to MLPs
Patches Are All You Need? | OpenReview
[2110.00476] ResNet strikes back: An improved training procedure in timm
The academic contribution to AI safety seems large - EA Forum
GPT-3: a disappointing paper - LessWrong
interpreting GPT: the logit lens - LessWrong
larger language models may disappoint you [or, an eternally unfinished draft] - LessWrong
"Non-Player Character" — Eliezer S. Yudkowsky
Train ChatGPT on Your Data - AlphaVenture Experiments
Discussion with Nate Soares on a key alignment difficulty - LessWrong
"Carefully Bootstrapped Alignment" is organizationally hard - LessWrong
GPTs are Predictors, not Imitators - LessWrong
Evolution provides no evidence for the sharp left turn - LessWrong
Scaffolded LLMs as natural language computers - LessWrong
Four mindset disagreements behind existential risk disagreements in ML - LessWrong