Posts
Showing posts from 2023
About me
This blog contains most of my public writing. If you'd like to talk with me, you can email me at my gmail address, which has the username: [first part of blog URL] + 'contact'. I like to see behind things. I have many ideas for cool things to make, so if you want to make cool things I think of (example: hyperphone), email me. If you want to make words you can go to lexicogenesis.zulipchat.com. If you're a person who has a streak of fanaticism for decreasing the probability that all human value is destroyed (by AGI), email me. If you're a woman and might want to go out with me, email me. I live in the Bay Area and want to have kids (preferably a lot) (with the right person). I'd be a great father. If you're a biologist or a linguist and are open to just shooting the shit, email me. (Or if you know a whole lot about something.) If you want to go hiking in the middle of the night, email me. If you want to go ice skating, email me. If you want to figure out...
Better debates
When two people disagree about a proposition even though they've thought about it a lot, the disagreement is often hard to resolve. There's a gulf of data, concepts, intuitions, experiences, inferences. Some of this gulf has to be resolved by the two people individually trying to collate and present their own positions more clearly and legibly, so that they can build up concepts and propositions in whoever is receiving the model. Also, most new understanding comes from people working on their own or with others who are already synced up——for the most part they already agree on what and how to investigate, they have shared context of past experience and data, they agree on background assumptions, they have a shared language, they trust each other. But still, a lot of value comes from debate. The debaters are forced to make their evidence and logic legible. Ideas are tested against other ideas from another at least somewhat coherent perspective. Analogies and disanalogies are dr...
New Alignment Research Agenda: Massive Multiplayer Organism Oversight
When there's an AGI that's smarter than a human, how will we make sure it's not trying to kill us? The answer, in outline, is clear: we will watch the AGI's thoughts, and if it starts thinking about how to kill us, we will turn it off and then fix it so that it stops trying to kill us. 1. Limits of AI transparency There is a serious obstacle to this plan. Namely, the AGI will be very big and complicated, so it will be very difficult for us to watch all of its many thoughts. We don't know how to build structures made of large groups of humans that can process that much information to make good decisions. How can we overcome this obstacle? ML systems Current AI transparency methods are fundamentally limited by the size and richness of their model systems. To gain practical empirical experience today with modeling very large systems, we have to look to systems that are big and complex enough, with the full range of abstractions, to be analogous to future AGI syste...
הלבת-אש ללא הסנה [Hebrew: "The flame of fire without the bush"]
[This post is labeled בבל, meaning it's especially experimental. See: בבל disclaimer ] I seem to have always already lost my wife. I do wonder where she is. I assume she doesn't know where I am, or else she would have returned to me, although——not being able to imagine that she's dead or nonexistent or otherwise radically disempowered——I also eventually come to wonder if she's forsaken me, which choice I would naturally be required to have made myself enough apparently separate to pretend acceptance of, at least long enough for her to depart. Sadly I have also forgotten where she might be, and what she looks like, and worst of all, the sound of her voice murmuring something secret in my ear. I've even forgotten her name. Did it start with a J? Maybe an M? an A? Or was it a $\daleth$ or an Л? I don't remember. We can be quite sure it doesn't start with an $\aleph$, since she's kind and patient. She likes lemons and she likes the feel of rock on her sk...
Communicating with binaries and spectra
To communicate, it's convenient to code information in words and numbers. Words are discrete, so they're well-suited to expressing binaries: this is big, that is small. They're also well-suited to express finite partitions: microscopic, tiny, small, big, huge, enormous. Thought is often tripped up by finite partitions: many things do not fit neatly into the partitions, or what's relevant about something might be only poorly expressible with the available partitions. So instead an adjective can be taken as pointing at a spectrum. This is bigger, that is smaller. This is 10 meters long, that is 1 millimeter long. Thought can also be tripped up by spectra: again, what's relevant might be only poorly expressible as lying somewhere on the spectrum. What's relevant might be multidimensional, so that a one-dimensional representation requires a lossy projection. This weighs 2000 kg and is 10 meters long, that weighs 3 mg and is 1 millimeter long. A description could ...
Please don't throw your mind away
1. Dialogue 2. Synopsis 3. Highly theoretical justifications for having fun 4. Appendix: What is this "fun" you speak of? What's a circle? Hookwave Random smooth paths Mandala Water flowing uphill Guitar chamber Groups, but without closure Wet wall 1. Dialogue [Warning: the following dialogue contains an incidental spoiler for "Music in Human Evolution" by Kevin Simler . That post is short, good, and worth reading without spoilers, and this post will still be here if you come back later. It's also possible to get the point of this post by skipping the dialogue and reading the other sections.] Pretty often, talking to someone who's arriving to the existential risk / AGI risk / longtermism cluster, I'll have a conversation like the following. ———————————————————— Tsvi: "So, what's been catching your eye about this stuff?" Arrival: "I think I want to work on machine learning, and see if I can contribute to align...
Rules for the flighty-souled
[This post is labeled בבל, meaning it's especially experimental. See: בבל disclaimer ] Never take your phone out while you're walking. Unless it's an emergency or you're going to take notes (but voice notes are preferable). Never take your phone out while you're with someone. Ever. Unless you explicitly take a break from being together. If they take their phone out first, it's less bad, but still never do it. Never wear clothing with words or images. Especially not logos or branded symbols. Accept as little money as possible. Never be in photos, ever. When you're having a long-distance call, make it audio only, not video. Avoid being inside cars. Never go on a dating app. Don't go on Facebook, Twitter, or other similar networks. Never say the same thing twice. Never speak to more than three people at once. Preferably, never speak to more than one person at once. Never touch anyone unless you mutually have some type ...
esc.
[This post is labeled בבל, meaning it's especially experimental. See: בבל disclaimer ] [Note February 2023: this is an unedited first draft written in April 2018 while I was heavily involved with a psychopath.] 180408 01:46:29 esc. summary: modalities attempt to mention a statement S, so to speak (by transforming it to a different statement [[M]] S), without using it. but the hidden effects of uttering S are often-roughly also caused by [[M]] S. thus S escapes its modality. speaking in a modality [[M]] attempts to transform a statement S into an object of some other type, denoted [[M]] S (which can also be taken to be a statement, but in a different modality (as in, sensory modality)). some examples of modalities: [[M]] S may be: a string (the quotation modality; for example, "i was going to say [[']]i can't deal with this right now[[']]"); an emotion (e.g. "[[i feel like]] you are trying to hurt me"); a perception (e.g. "[[it seems to...
Wildfire of strategicness
It may not be feasible to make a mind that makes achievable many difficult goals in diverse domains, without the mind also itself having large and increasing effects on the world. That is, it may not be feasible to make a system that strongly possibilizes without strongly actualizing. But suppose that this is feasible, and there is a mind M that strongly possibilizes without strongly actualizing. What happens if some mental elements of M start to act strategically, selecting, across any available domain, actions predicted to push the long-term future toward some specific outcome? The growth of M is like a forest or prairie that accumulates dry grass and trees over time. At some point a spark ignites a wildfire that consumes all the accumulated matter. The spark of strategicness, if such a thing is possible, recruits the surrounding mental elements. Those surrounding mental elements, by hypothesis, make goals achievable. That means the wildfire can recruit these surrounding element...
An anthropomorphic AI dilemma
Either generally-human-level AI will work internally like humans work internally, or not. If generally-human-level AI works like humans, then takeoff can be very fast, because in silico minds that work like humans are very scalable. If generally-human-level AI does not work like humans, then intent alignment is hard because we can't use our familiarity with human minds to understand the implications of what the AI is thinking or to understand what the AI is trying to do.
The voyage of novelty
Novelty is understanding that is new to a mind, that doesn't readily correspond or translate to something already in the mind. We want AGI in order to understand stuff that we haven't yet understood. So we want a system that takes a voyage of novelty: a creative search progressively incorporating ideas and ways of thinking that we haven't seen before. A voyage of novelty is fraught: we don't understand the relationship between novelty and control within a mind.
"Sorry" and the originary concept of apology
1. Paradox of "apology" What does the word "apology" mean? Today it means "say you're sorry". In Ancient Greece, as people say, the etymon ἀπολογία meant "a speech made in defense of something", and this meaning can also attach to the English word "apology". Aren't these nearly exact opposites? Saying sorry is saying you did something wrong, and ἀπολογία is defending what you did, saying it's not wrong.
Verichtung
[This post is labeled בבל, meaning it's especially experimental. See: בבל disclaimer ] (Caveat lector: I only speak English and didn't run this by anyone.) Sonnendurchflutet Bäume über einem stillgelegt Steinbruch, Spiegel einander gegenüber, zu nah. Die geworfenen Würfel sind Schlangenaugen. ...איך להסביר לילד שכולם ימ Shield-toad left in the Haze, Gebröckelt Steineule auf der Hügelspitze. Unter der Unterfläche wartet der blaue Dynamo. [Roughly: Sun-flooded trees over a disused quarry, mirrors facing each other, too close. The thrown dice are snake eyes. ...how to explain to a child that everyone... Shield-toad left in the Haze, crumbled stone owl on the hilltop. Under the under-surface waits the blue dynamo.] Notes: "Verichtung" is a made-up word, patterned off "Vernichtung", replacing "nicht" with the obsolete analogous word "icht". "Icht" could maybe be viewed as "je-Wicht" (as in English "wight"), meaning something like "ever something", as opposed to "nicht" = "nie-Wicht" = "never something". So "Verichtung" would mean something like "be-something-ing, to make something be something, to make something...
בבל disclaimer
Here are the posts labeled "בבל": https://tsvibt.blogspot.com/search/label/%D7%91%D7%91%D7%9C Posts labeled "בבל" are more experimental, unreliable, poetic, prophetic, metaphoric, contradictory, false, confused, incoherent, unclear, inchoate, incontinent, insane, repetitive, low-effort, pointless, cringe, fringe, binge, silly, facile, babbling, rambling, squabbling, and in any other manner not necessarily to be taken too seriously, compared to other posts.