Monday, August 12, 2019

judgement day vs extinction

Judgement day allows us to feel comfortable knowing that, in the end, the universe is ultimately in tune with what we call “justice”. Nothing was ever truly at stake. On the other hand, extinction alerts us to the fact that everything we hold dear has always been in jeopardy. In other words, everything is at stake.

Extinction was not much discussed before 1700 due to a background assumption, widespread prior to the Enlightenment, that it is the nature of the cosmos to be as full of moral value and worth as possible. This, in turn, led people to assume that all other planets are populated with “living and thinking beings” exactly like us.


Saturday, December 16, 2017

development and repair

I cut my finger a few days ago. It stopped bleeding, but the next day I bumped it on something and it bled some more.

I was thinking about how it heals. There are some mechanisms that detect injury and deploy repair processes. But it never goes back exactly to how it was before the injury.

Scar tissue is an obvious example, but I wonder if this is pervasive, essentially because there are different mechanisms for healing than for development.

I wonder if it's difficult for evolution to find healing mechanisms that exactly replicate development mechanisms. If they only approximate, then over time your body (originally patterned through development to function well) is gradually replaced by structures that are trying imperfectly to replicate the original ones.

Could species with low senescence be the ones that keep re-using their developmental mechanisms?

Would that also mean that solving ageing could be nearly impossible because we'd have to design repair mechanisms that perfectly match all the developmental mechanisms?

great filter

1) you could say intelligence means being able to give yourself what you want.

2) giving yourself what you want often isn't good for you. food makes you fat, youtube videos make you conservative or liberal.

3) does this explain why we don't see intelligent life anywhere in the universe?

there are feedback mechanisms to defend against getting fat, like dieting. but could it be that as our ability to give ourselves what we want in many domains accelerates, it overpowers those mechanisms? is this what inevitably kills any intelligent species?

ps. this is steve's idea

Wednesday, December 13, 2017

hot-cold cycles

one other connection clicked with me this weekend. maybe it's pretty obvious, but i never noticed it before. in some spiritual practices there's a concept of alternating between:
1) forcing yourself to focus (which is in a way kind of unnatural), and
2) allowing yourself to not focus.
like, tibetan buddhism calls it "intensifying" and "easing up".

meanwhile, i've been kind of obsessed with the tradeoff between "accepting yourself" and "challenging yourself". challenging myself is like: i try to find what i'm afraid of, and force myself to let it happen. that helps me get past some of my fears, even though i always dread doing it. but accepting myself means i don't have to do that all the time; i can just rest and accept that i have fears and that they're causing problems for me and other people.

i think that tradeoff (challenging vs accepting) has been interesting to me because i've never been able to find any kind of conceptual framework to get underneath those concepts. it seems like there's value in being kind to yourself, but also value in challenging yourself, even though they're almost opposite.

anyway, the question is: could the rhythm of challenging and accepting be the same as the heating and cooling cycles that seem to be necessary for life?

Sunday, May 14, 2017

GANs, arms races, and self-consciousness

in evolution, there are always arms races between competing organisms. each organism has to form a model of the other one, in order to somehow outsmart it. that forces you to keep coming up with novel, creative solutions, because you have to do something that the other guy currently doesn't have a model of. this is similar to the idea of generative adversarial networks.
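as a toy picture of why adversaries never settle, here's the standard bilinear min-max game people use to cartoon adversarial training dynamics (my own illustration, made-up numbers, not from any particular paper):

```python
# toy illustration: the bilinear game min over g, max over d of
# f(g, d) = g * d, the textbook cartoon of adversarial dynamics.
lr = 0.1
g, d = 1.0, 0.0  # "generator" and "discriminator" parameters

for _ in range(100):
    grad_g = d  # df/dg
    grad_d = g  # df/dd
    # simultaneous gradient steps: g descends on f, d ascends
    g, d = g - lr * grad_g, d + lr * grad_d

# neither side converges; each step by one invalidates the other's
# best response, so the pair orbits outward instead of settling.
print(g * g + d * d)  # grows to exactly 1.01**100 (about 2.7) from 1.0
```

the point of the toy is just that neither player can stop updating: the "novel, creative solutions" pressure falls out of the dynamics themselves, not out of either player's objective.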

i wonder if what happened that made humans special, is that that process got condensed to happen inside a single organism. at some point, like ~70,000 years ago maybe, social interactions became the dominant factor in our fitness. so we developed the ability to model other people's minds incredibly well (people have argued this is why we have big brains).

but the unforeseen consequence was that you effectively have multiple minds running inside a single brain, and they're evaluating each other. maybe this sometimes feels like self-consciousness, although i think the feeling we usually call "self-consciousness" is just a subset of this.

that's like a high-bandwidth (since it's inside a single brain) version of the GAN or arms-race phenomenon that's happened throughout evolution. these models of other people's minds are constantly providing error signals (probably all the time, including during sleep) that are updating your "own" mind. that could be what drives us to have such rich minds.

Thursday, February 02, 2017

even if i concentrate on trying to be aware of my own bias, it's still hard to avoid a gut reaction of disagreeing with the conservative posts.

what if we think of this as a psychology experiment that reveals our own bias to us?

for example, no matter how many times you see your own (retinal) blind spot, you don't start to think that there's actually a hole in the world; you accept that it's just an artifact of your perception.

likewise, imagine if you're presented with conservative news stories, and you feel yourself disagreeing with them, but you simultaneously recognize that that disagreement is just a quirky psychological artifact... (or vice versa if you're a conservative)

Sunday, November 06, 2016

AI and media

so we have this thing now where liberals can choose to only read liberal media, and conservatives select conservative media. steve was pointing out that as more content is created by AI, it won't just be liberal vs conservative, but your own micro-customized perspective fed back in your face.

i guess it seems almost inevitable, if the internet is designed to get you "the information you want", then that's whatever matches your views. you could change the objective to giving "the information you *should* have", but then somebody else has to decide what that is. maybe this is the trap all intelligent life falls into, and why we don't see them everywhere in the galaxy?

one glimmer of hope would be if we can build AIs that don't just maximize a value function, but are actually alive in a deeper sense, that they're on the edge of life and death, with a feeling of mystery and wonder.

Thursday, October 06, 2016

not thought through email 2

you know how after you learn an association, if you present one of the two items alone, a representation of the other one appears after a short delay?

i was thinking about this in the context of the remerge model:

it could be that the dynamics of hippocampus are particularly good for stitching together the topology of space using little chunks of sequence. and more recently in evolution this is also used for stitching together arbitrary concepts.

when two or more concepts are linked together for the first time, this is initially stored in recurrent MTL dynamics, such that presenting either item alone triggers a fast sequence including the other item. (dharsh's idea, if i understand right.)

but maybe, after more learning (or sleep/replay), some PFC neurons, which are repeatedly being exposed to rapid sequential activation of one item after the other, become specifically sensitive to the new compound. as in tea-jelly.
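a cartoon of that compound step (entirely made-up numbers and names, just to pin the idea down): a candidate PFC unit watches the item layer, the fast replay makes tea and jelly land inside one input trace, and a plain hebbian rule leaves the unit preferring the pair over either item alone.

```python
import numpy as np

# made-up sketch: one candidate "PFC" unit watching an item layer.
# during a fast "MTL" replay, tea then jelly fire close enough in
# time that the unit's input trace contains both at once.
n_items = 5
TEA, JELLY = 0, 1
w = np.zeros(n_items)  # the unit's input weights

def replay_pair(w, a, b, lr=0.2, n_replays=20):
    for _ in range(n_replays):
        x = np.zeros(n_items)
        x[a] = x[b] = 1.0       # temporal summation of the fast sequence
        post = w @ x + 0.1      # unit response (small baseline to bootstrap)
        w += lr * post * x      # hebbian: strengthen coactive inputs
    return w

w = replay_pair(w, TEA, JELLY)

def response(w, *items):
    x = np.zeros(n_items)
    x[list(items)] = 1.0
    return w @ x

# the unit now responds more to the compound than to either item alone
print(response(w, TEA, JELLY) > response(w, TEA))  # True
```

obviously real PFC selectivity would need something competitive so different units grab different compounds; this only shows the minimal "repeated rapid pairing makes a conjunction detector" step.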

now the tea-jelly cells in PFC can become building blocks for new sequences in MTL. maybe one reason the PFC is huge in humans is that we have to know about crazy amounts of stuff, like global warming and tensorflow. these are things you can't get from a stacked convnet on the visual stream; only from binding multiple things?

this could also fit with the idea that episodic learning and statistical learning are part of the same mechanism. when you learn a new concept, it's originally from a single or small number of examples. it would be interesting if the final geometry of the abstract concept, even after it's been well consolidated, still somewhat depends on what first examples you learned it from.

not thought through email 1


would some of the trajectories have to be very long? like they give the example of college acceptance's value being conditional on whether you graduate...

how do you decide what to store in a trace? would a trajectory hop from homework to college application to graduation?

episodic trajectories might be one way of looking at "conflicting beliefs". you could easily have two episodes, which start from different starting points, and constitute logically incompatible beliefs. like, "everyone should earn a fair wage, therefore we should set a high minimum wage". and, "a higher minimum wage forces employers to reduce full-time employees".

how separable is the state generalization problem from the episodic idea? (although there's the good point about advantages of calculating similarity at decision time.)

they don't talk much (or maybe i missed it) about content-based lookup for starting trajectories. is it basically "for free" that you start trajectories near where you are now (or near where you're simulating yourself to be)? is it like rats where the trajectories are constantly going forward from your current position, maybe in theta rhythm?

i was thinking about a smooth continuum between "episodic" and "statistical/model-based". what if we picture the episodic trajectories as being stored in a RNN. when you experience a new trajectory, you could update the weights of the RNN such that whenever the RNN comes near the starting state of that trajectory, it will probably play out the exact sequence. but you could also update the weights in a different way (roughly: with a smaller update) such that the network is influenced by the statistics of the sequence, but doesn't deterministically play out that particular sequence.
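to make the two update regimes concrete, here's a tiny stand-in (a linear one-step predictor rather than a real RNN, with made-up sizes): the same single trajectory learned with a full-size delta-rule update versus a small one.

```python
import numpy as np

# toy stand-in for the RNN idea: one-hot states and a linear
# next-state predictor W, trained on one trajectory at two scales.
n = 4
rng = np.random.default_rng(0)
W_big = rng.normal(scale=0.1, size=(n, n))  # "episodic" regime
W_small = W_big.copy()                      # "statistical" regime

def one_hot(i):
    v = np.zeros(n)
    v[i] = 1.0
    return v

trajectory = [0, 1, 2]  # experience the sequence once

def learn(W, lr):
    for s, s_next in zip(trajectory, trajectory[1:]):
        x, y = one_hot(s), one_hot(s_next)
        W += lr * np.outer(y - W @ x, x)  # delta rule toward the seen transition
    return W

learn(W_big, lr=1.0)     # full update: the transition is written in exactly
learn(W_small, lr=0.05)  # small update: the transition is only slightly favored

def rollout(W, start, steps=2):
    states = [start]
    for _ in range(steps):
        states.append(int(np.argmax(W @ one_hot(states[-1]))))
    return states

print(rollout(W_big, 0))  # [0, 1, 2]: deterministic replay of the episode
```

in the W_small copy, the 0-to-1 weight moves toward 1 but everything else barely changes, which is the "influenced by the statistics without deterministically playing out the sequence" end of the continuum; the continuum is then just the size of the update.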

this view is also kind of nice because episodes aren't separate from your statistical learning. some episodes from early in life might even form a big part of your statistical beliefs. especially as the episodes are replayed more and more, over time, from hippocampus out into cortex.

is the sampling of trajectories influenced by current goals?
like, the current goal representations in PFC could be continually exerting pressure on the sampling dynamics to produce outcomes that look like the goal representations?