Dagens ord


Ansvar väger tyngre än frihet - Responsibility trumps liberty

30 June 2019

Intentional agents as leaky abstractions

I have spent the day reading Kaj Sotala's sequence "Multiagent Models of Mind" on LessWrong. Six posts have been published so far. At least three more are planned; they look really interesting, and I hope that they will answer my main question so far:

What does this actually mean, and what is the motivation for saying it?

Agent-ness being a leaky abstraction is not exactly a novel concept for Less Wrong; it has been touched upon several times, such as in Scott Alexander’s Blue-Minimizing Robot Sequence. At the same time, I do not think that it has been quite fully internalized yet, and that many foundational posts on LW go wrong due to being premised on the assumption of humans being agents. In fact, I would go as far as to claim that this is the biggest flaw of the original Sequences: they were attempting to explain many failures of rationality as being due to cognitive biases, when in retrospect it looks like understanding cognitive biases doesn’t actually make you substantially more effective. But if you are implicitly modeling humans as goal-directed agents, then cognitive biases is the most natural place for irrationality to emerge from, so it makes sense to focus the most on there.

This was what piqued my interest in reading, and what kept me going for five hours straight(!). But I didn't find what I was looking for. I did get a lot of other useful information, though. All in all, this is a wonderful, brilliant, deep and highly thought-provoking text. A lot of work and thought has gone into it. Kaj is surely one of the brightest minds around.

Now, I have no problem whatsoever with the first part of the passage quoted above. Of course intentional agents are an abstraction, and as such that abstraction is of course leaky. My concern lies with the second part: It seems to suggest that viewing people as intentional agents is mistaken; too coarse a model; misleading. Which seems to lead to the conclusion that cognitive biases are the wrong way to characterize human thinking and behavior. Which suggests that they are not even real...

I may be overly trigger-happy here. I am not out to criticize Sotala himself, nor - as it turns out - any major part of what he actually has written in this sequence so far. It is just that I am currently (yet again) in a process of investigation and possible re-orientation of what I believe is best characterized as the latest round of the "rationality wars". I am currently reading Gerd Gigerenzer's book "Risk Savvy", in a long stretch of similar stuff (e.g. Mercier & Sperber), with the aim of trying to reconcile the seemingly opposing sides in an ever ongoing battle for the right to define "rationality".

I am a long-time fan of Kahneman (and Dennett). It may be self-delusion on my part, but much of the criticism leveled at him and others seems to me either plain wrong, ideologically motivated or mistaken. The more I read, the more I get the feeling that my intuitive interpretation of Kahneman and others does not need updating; rather, it is his critics who either straw-man him or just do not have the whole picture.

Sotala promises to tell me why the biases and fallacies school of thought is lacking. But I just don't see it.

I would go as far as to claim that this is the biggest flaw of the original Sequences: they were attempting to explain many failures of rationality as being due to cognitive biases, when in retrospect it looks like understanding cognitive biases doesn’t actually make you substantially more effective. But if you are implicitly modeling humans as goal-directed agents, then cognitive biases is the most natural place for irrationality to emerge from, so it makes sense to focus the most on there. 
Just knowing that an abstraction leaks isn’t enough to improve your thinking, however. To do better, you need to know about the actual underlying details to get a better model. In this sequence, I will aim to elaborate on various tools for thinking about minds which look at humans in more granular detail than the classical agent model does. Hopefully, this will help us better get past the old paradigm.


There is a sense in which I get this: Higher-level abstractions trade accuracy for expediency, yes. And sometimes you need to go down an explanatory level or two, depending on your goals. But when it comes to explaining human decision making, or how humans view themselves and others and the society that emerges from their interaction, or why this is the case, or how and when problems and contradictions occur, or what to do about it... Well, I just don't see the need to shed the intentional stance or to deconstruct it. (Apart from convincing people that they are usually MoreWrong than they think.)

My main question when reading the above was - and still is: Are we talking descriptively, prescriptively, or normatively?

Mercier & Sperber, for instance, accuse Kahneman of assuming a logical, but flawed, human psyche. Rationality to them seems to mean a description of what people are, what they have evolved into - even if the process is incomplete. They then go on to sarcastically point out that there is no evolutionary reason to expect people to be logical inference machines. At the same time they redefine rationality to mean "socially flexible and pragmatic" rather than logical, and contend that that is exactly what people are - so stop shaming them for not being able to solve logical puzzles. Also, they and Gigerenzer and others go on to say: "Oh, and by the way, people are quite adept at logic, statistics and probabilities - if you just stop tricking them!"

To me, this is highly confused. Man is not the measure of everything. Rationality, meaning logic, statistical thinking, utility maximization, etc., is a cultural invention, a norm, a standard to which we aspire - and should aspire. The fact that we have a hard time living up to those ideals is an observation of fact, and there are plenty of good reasons why this is the case. But it is equally obvious that we should do whatever we can to get better at it. We can't (yet) re-engineer ourselves, so we need to work on education, societal structures, political systems, etc.

Gigerenzer thinks Sunstein is an autocrat who doesn't trust people to know their own good. I think that Sunstein is way too libertarian.


---


One line of evidence for this are subliminal priming experiments, not to be confused with the controversial “social priming” effects in social psychology; unlike those effects, these kinds of priming experiments are well-defined and have been replicated many times.


Is there a difference? Sotala never uses the term, but I constantly think of associative networks (and perceptrons). Priming is potential for spreading activation. Priming is priming, however mixed or exaggerated the results from sloppy social-psychology experiments may be.
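
Since the sequence never spells the associative-network picture out, here is a minimal sketch of what I mean by "priming as potential for spreading activation". The network, the node names, the edge weights and the decay constant are all my own illustrative assumptions, not anything from Sotala's posts.

```python
# Minimal, illustrative sketch of spreading activation in an associative
# network. Nodes, weights and decay are invented for the example; the point
# is only that activating one concept ("the prime") raises the activation
# of its associates, making them easier to trigger afterwards.

network = {
    "doctor": {"nurse": 0.8, "hospital": 0.6},
    "nurse": {"doctor": 0.8, "hospital": 0.5},
    "hospital": {"doctor": 0.6, "nurse": 0.5, "building": 0.3},
    "building": {"hospital": 0.3},
}

def spread(activation, steps=2, decay=0.5):
    """Propagate activation along weighted edges for a few steps."""
    for _ in range(steps):
        new = dict(activation)
        for node, level in activation.items():
            for neighbour, weight in network.get(node, {}).items():
                new[neighbour] = new.get(neighbour, 0.0) + level * weight * decay
        activation = new
    return activation

# Priming "doctor" leaves "nurse" partially pre-activated,
# which is all that "priming" needs to mean here.
print(spread({"doctor": 1.0}))
```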



---



First, in order for the robot to take physical actions, the intent to do so has to be in its consciousness for a long enough time for the action to be taken. If there are any subagents that wish to prevent this from happening, they must muster enough votes to bring into consciousness some other mental object replacing that intention before it’s been around for enough time-steps to be executed by the motor system. (This is analogous to the concept of the final veto in humans, where consciousness is the last place to block pre-consciously initiated actions before they are taken.)


Oh, oh, oh! Veto without a libertarian prime mover. Yes! This resolves the tension I experienced when reading Patrik Lindenfors' speculations on free will in his new book "The Cultural Animal". Libet experiments should measure several different signals simultaneously.
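
To make the quoted mechanism concrete for myself: a toy version of "an intention must survive N time-steps in consciousness before the motor system executes it", where any coalition of subagents can out-vote the pending intention before the deadline. The subagent names, vote counts and threshold are my own illustrative guesses, not Sotala's actual formalism.

```python
# Toy sketch of a global-workspace "final veto": an intention is executed
# only if it stays in consciousness for EXECUTION_DELAY consecutive steps.
# Subagents bid each step; a stronger bid replaces the current content.
# All numbers and intentions are invented for illustration.

EXECUTION_DELAY = 3

def run(bids_per_step):
    current, age = None, 0
    for bids in bids_per_step:            # bids: {intention: total votes}
        winner = max(bids, key=bids.get)
        if winner == current:
            age += 1
        else:
            current, age = winner, 1      # veto: content replaced, timer resets
        if age >= EXECUTION_DELAY:
            return f"executed: {current}"
    return "nothing executed"

# "poke the stove" wins early, but a fear-driven manager out-votes it
# on step 3, so the action is vetoed before the motor system fires.
steps = [
    {"poke stove": 5, "cook dinner": 3},
    {"poke stove": 5, "cook dinner": 3},
    {"poke stove": 4, "cook dinner": 6},
    {"cook dinner": 6},
    {"cook dinner": 6},
]
print(run(steps))
```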


---


Second, the different subagents do not see each other directly: they only see the consequences of each other’s actions, as that’s what’s reflected in the contents of the workspace. In particular, the self-narrative agent has no access to information about which subagents were responsible for generating which physical action. It only sees the intentions which preceded the various actions, and the actions themselves. Thus it might easily end up constructing a narrative which creates the internal appearance of a single agent, even though the system is actually composed of multiple subagents.


Oh! Self-serving bias, confabulation, FAE, myside bias... But what is the difference in practice? It is still a case of self-deception.


---


Third, even if the subagents can’t directly see each other, they might still end up forming alliances. For example, if the robot is standing near the stove, a curiosity-driven subagent might propose poking at the stove (“I want to see if this causes us to burn ourselves again!”), while the default planning system might propose cooking dinner, since that’s what it predicts will please the human owner. Now, a manager trying to prevent a fear model agent from being activated, will eventually learn that if it votes for the default planning system’s intentions to cook dinner (which it saw earlier), then the curiosity-driven agent is less likely to get its intentions into consciousness. Thus, no poking at the stove, and the manager’s and the default planning system’s goals end up aligned. 
Fourth, this design can make it really difficult for the robot to even become aware of the existence of some managers. A manager may learn to support any other mental processes which block the robot from taking specific actions. It does it by voting in favor of mental objects which orient behavior towards anything else. This might manifest as something subtle, such as a mysterious lack of interest towards something that sounds like a good idea in principle, or just repeatedly forgetting to do something, as the robot always seems to get distracted by something else. The self-narrative agent, not having any idea of what’s going on, might just explain this as “Robby the Robot is forgetful sometimes” in its internal narrative.


Ah! Dunning-Kruger, ignorance, witness psychology, unwarranted self-assurance... But how does this differ from the intentional-agent, biases-and-fallacies perspective?



---


Fifth, the default planning subagent here is doing something like rational planning, but given its weak voting power, it’s likely to be overruled if other subagents disagree with it (unless some subagents also agree with it). If some actions seem worth doing, but there are managers which are blocking it and the default planning subagent doesn’t have an explicit representation of them, this can manifest as all kinds of procrastinating behaviors and numerous failed attempts for the default planning system to “try to get itself to do something”, using various strategies. But as long as the managers keep blocking those actions, the system is likely to remain stuck.


Aha! Akrasia, ”irrationality” in the sense of not living up to the homo economicus template, etc...


---


Sixth, the purpose of both managers and firefighters is to keep the robot out of a situation that has been previously designated as dangerous. Managers do this by trying to pre-emptively block actions that would cause the fear model agent to activate; firefighters do this by trying to take actions which shut down the fear model agent after it has activated. But the fear model agent activating is not actually the same thing as being in a dangerous situation. Thus, both managers and firefighters may fall victim to Goodhart’s law, doing things which block the fear model while being irrelevant for escaping catastrophic situations.

But this is missing an evolutionary perspective (which Sotala brings up much later) outside of the individual agent. Systems that are reasonably well adjusted beget offspring with pre-installed settings that also work reasonably well (as long as the environment doesn't change too much).

Goodhart! Yes! Isn't that the perfect summation of every bias in the book!?


It's a (too) tall order to get people to change their evolved picture of themselves and others, from intentional agents to more or less coordinated subsystems.

Normatively, we also want to act, judge and be judged as intentional agents.



---



Exiles are said to be parts of the mind which hold the memory of past traumatic events, which the person did not have the resources to handle. They are parts of the psyche which have been split off from the rest and are frozen in time of the traumatic event. When something causes them to surface, they tend to flood the mind with pain. For example, someone may have an exile associated with times when they were romantically rejected in the past. 

IFS further claims that you can treat these parts as something like independent subpersonalities. You can communicate with them, consider their worries, and gradually persuade managers and firefighters to give you access to the exiles that have been kept away from consciousness. When you do this, you can show them that you are no longer in the situation which was catastrophic before, and now have the resources to handle it if something similar was to happen again. This heals the exile, and also lets the managers and firefighters assume better, healthier roles.


Very Freudian! Both in a good sense and in a bad one. (And I suspect that the IFS crowd really longs for a true Self - which is exactly what there isn't!)


---



In my earlier post, I remarked that you could view language as a way of joining two people’s brains together. A subagent in your brain outputs something that appears in your consciousness, you communicate it to a friend, it appears in their consciousness, subagents in your friend’s brain manipulate the information somehow, and then they send it back to your consciousness. 
If you are telling your friend about your trauma, you are in a sense joining your workspaces together, and letting some subagents in your workspace, communicate with the “sympathetic listener” subagents in your friend’s workspace. So why not let a “sympathetic listener” subagent in your workspace, hook up directly with the traumatized subagents that are also in your own workspace?


Yeah... This is what Mercier & Sperber get right - social cognition. But the picture of communication with others is a bit too idealized. There is a lot of "pollution" in those exchanges... Even internal monologues are polluted by irrelevant concerns and noise.


---



Instead of remaining blended, you then use various unblending / cognitive defusion techniques that highlight the way by which these thoughts and emotions are coming from a specific part of your mind. You could think of this as wrapping extra content around the thoughts and emotions, and then seeing them through the wrapper (which is obviously not-you), rather than experiencing the thoughts and emotions directly (which you might experience as your own). 
...when I became aware of how much time I spent on useless rumination while on walks, I got frustrated. And this seems to have contributed to making me ruminate less: as the system’s actions and their overall effect were metacognitively represented and made available for the system’s decision-making, this had the effect of the system adjusting its behavior to tune down activity that was deemed useless.

Creativity? Eureka moments? Openness to new impressions? (This is discussed later.)

---


Similarly, several of the experiments which get people to exhibit incoherent behavior rely on showing different groups of people different formulations of the same question, and then indicating that different framings of the same question get different answers from people. It doesn’t work quite as well if you show the different formulations to the same people, because then many of them will realize that differing answers would be inconsistent.

This is the point of contention in the rationality wars!


---


The original question which motivated this section was: why are we sometimes incapable of adopting a new habit or abandoning an old one, despite knowing that to be a good idea? And the answer is: because we don’t know that such a change would be a good idea. Rather, some subsystems think that it would be a good idea, but other subsystems remain unconvinced. Thus the system’s overall judgment is that the old behavior should be maintained.

Yees! But normatively, we can know that something is better, while emotionally we do not experience it that way. This is what a bias is!



---



Nevertheless, a fundamental problem remains: at any point in time, which mode should be allowed to control which component of a task? Daw et al. have used a computational approach to address this problem. Their analysis was based on the recognition that goal-directed responding is flexible but slow and carries comparatively high computational costs as opposed to the fast but inflexible habitual mode. They proposed a model in which the relative uncertainty of predictions made by each control system is tracked. In any situation, the control system with the most accurate predictions comes to direct behavioural output. 
Note those last sentences: besides the subsystems making their own predictions, there might also be a meta-learning system keeping track of which other subsystems tend to make the most accurate predictions in each situation, giving extra weight to the bids of the subsystem which has tended to perform the best in that situation. We’ll come back to that in future posts.
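
As a reading aid for myself, a bare-bones sketch of the kind of arbitration Daw et al. are describing: each controller proposes an action, the one whose predictions have been more reliable in the current situation gets to drive behaviour, and the reliability estimates are themselves updated by experience. The numbers and the simple error-tracking rule are my own simplification of the idea, not the actual model from the paper.

```python
# Minimal sketch of uncertainty-based arbitration between a slow, flexible
# "goal-directed" controller and a fast, inflexible "habitual" controller.
# Whichever system has the lower tracked prediction error (i.e. the more
# accurate predictions) in this situation controls output.
# All values are invented for illustration.

controllers = {
    "goal_directed": {"proposed_action": "take new shortcut", "avg_error": 0.9},
    "habitual":      {"proposed_action": "take usual route",  "avg_error": 0.4},
}

def arbitrate(controllers):
    # Pick the controller whose predictions have been most accurate so far.
    best = min(controllers, key=lambda name: controllers[name]["avg_error"])
    return best, controllers[best]["proposed_action"]

def update_error(controllers, name, observed_error, rate=0.1):
    # Running average of each controller's prediction error in this situation.
    c = controllers[name]
    c["avg_error"] += rate * (observed_error - c["avg_error"])

print(arbitrate(controllers))   # early on, the well-calibrated habit wins

# If the environment changes, the habit keeps mispredicting, its tracked
# error grows, and the goal-directed controller eventually takes over.
for _ in range(20):
    update_error(controllers, "habitual", observed_error=1.5)
    update_error(controllers, "goal_directed", observed_error=0.3)
print(arbitrate(controllers))
```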

Automatic vs controlled processes (systems 1 and 2). Again, a tall order to transition from the former to the latter. Energy conservation. But also, built-in inertia to avoid paralysis (see the Minsky quote):


”Human self-control is no simple skill, but an ever-growing world of expertise that reaches into everything we do. Why is it that, in the end, so few of our self-incentive tricks work well? Because, as we have seen, directness is too dangerous. If self-control were easy to obtain, we'd end up accomplishing nothing at all.”


---


When there is significant uncertainty, the brain seems to fall back to those responses which have worked the best in the past - which seems like a reasonable approach, given that intelligence involves hitting tiny targets in a huge search space, so most novel responses are likely to be wrong.

Also over evolutionary time, over generations. Bias as hard-coded patterns which have previously comprised the best compromise.

---



...positive or negative moods tend to be related to whether things are going better or worse than expected, and suggest that mood is a computational representation of momentum, acting as a sort of global update to our reward expectations.

Yeeesss!!!
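
The way I read "mood as a computational representation of momentum": something like a decaying average of recent reward prediction errors that globally nudges reward expectations up or down. The few lines below are my own guess at what such a mechanism could look like, not the authors' actual model; all parameters and reward values are invented.

```python
# Mood as "momentum": a decaying average of recent reward prediction errors
# that biases all reward expectations. Purely illustrative numbers.

expectation, mood = 5.0, 0.0
ALPHA, MOOD_RATE, MOOD_BIAS = 0.3, 0.2, 0.5

for reward in [6, 7, 8, 8, 9]:   # things keep going better than expected
    error = reward - (expectation + MOOD_BIAS * mood)
    mood += MOOD_RATE * (error - mood)     # mood tracks recent surprises
    expectation += ALPHA * error           # ordinary learning step
    print(f"reward={reward} mood={mood:+.2f} expectation={expectation:.2f}")
```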


---


So to repeat the summary that I had in the beginning: we are capable of changing our behaviors on occasions when the mind-system as a whole puts sufficiently high probability on the new behavior being better, when the new behavior is not being blocked by a particular highly weighted subagent (such as an IFS protector whose bids get a lot of weight) that puts high probability on it being bad, and when we have enough slack in our lives for any new behaviors to be evaluated in the first place. Akrasia is subagent disagreement about what to do.


This is perfectly in line with the bias perspective.


---



Likewise, the subagent frame seems most useful when a person’s goals interact in such a way that applying the intentional stance - thinking in terms of the beliefs and goals of the individual subagents - is useful for modeling the overall interactions of the subagents.

Confusing. Wasn't the whole point to question the intentional agent, the system as a whole, the unmoved mover, the green man at the center of it all?



---



More generally, subagents may be incentivized to resist belief updating for at least three different reasons (this list is not intended to be exhaustive): 
1 The subagent is trying to pursue or maintain a goal, and predicts that revising some particular belief would make the person less motivated to pursue or maintain the goal. 
2 The subagent is trying to safeguard the person’s social standing, and predicts that not understanding or integrating something will be safer, give the person an advantage in negotiation, or be otherwise socially beneficial. For instance, different subagents holding conflicting beliefs allows a person to verbally believe in one thing while still not acting accordingly - even actively changing their verbal model so as to avoid falsifying the invisible dragon in the garage. 
3 Evaluating a belief would require activating a memory of a traumatic event that the belief is related to, and the subagent is trying to keep that memory suppressed as part of an exile-protector dynamic.


Reminds me of Omohundro's thesis on goal-preservation. (And Olle's problematization of the same...)


---



Suppose that a disease, or a monster, or a war, or something, is killing people. And suppose you only have enough resources to implement one of the following two options:
1. Save 400 lives, with certainty.
2. Save 500 lives, with 90% probability; save no lives, 10% probability.
Most people choose option 1. [...] If you present the options this way:
1. 100 people die, with certainty.
2. 90% chance no one dies; 10% chance 500 people die.
 
Then a majority choose option 2. Even though it's the same gamble. You see, just as a certainty of saving 400 lives seems to feel so much more comfortable than an unsure gain, so too, a certain loss feels worse than an uncertain one. 
In my previous post, I presented a model where subagents which are most strongly activated by the situation are the ones that get access to the motor system. If you are hungry and have a meal in front of you, the possibility of eating is the most salient and valuable feature of the situation. As a result, subagents which want you to eat get the most decision-making power. On the other hand, if this is a restaurant in Jurassic Park and a velociraptor suddenly charges through the window, then the dangerous aspects of the situation become most salient. That lets the subagents which want you to flee to get the most decision-making power. 
Eliezer’s explanation of the saving lives dilemma is that in the first framing, the certainty of saving 400 lives is salient, whereas in the second explanation the certainty of losing 100 lives is salient. We can interpret this in similar terms as the “eat or run” dilemma: the action which gets chosen, depends on which features are the most salient and how those features activate different subagents (or how those features highlight different priorities, if we are not using the subagent frame). 
Suppose that you are someone who was tempted to choose option 1 when you were presented with the first framing, and option 2 when you were presented with the second framing. It is now pointed out to you that these are actually exactly equivalent. You realize that it would be inconsistent to prefer one option over the other just depending on the framing. Furthermore, and maybe even more crucially, realizing this makes both the “certainty of saving 400 lives” and “certainty of losing 100 lives” features become equally salient. That puts the relevant subagents (priorities) on more equal terms, as they are both activated to the same extent. 
What happens next depends on what the relative strengths of those subagents (priorities) are otherwise, and whether you happen to know about expected value. Maybe you consider the situation and one of the two subagents (priorities) happens to be stronger, so you decide to consistently save 400 or consistently lose 100 lives in both situations. Alternatively, the conflicting priorities may be resolved by introducing the rule that “when detecting this kind of a dilemma, convert both options into an expected value of lives saved, and pick the option with the higher value”. 
By converting the options to an expected value, one can get a basis by which two otherwise equal options can be evaluated and chosen between. Another way of looking at it is that this is bringing in a third kind of consideration/subagent (knowledge of the decision-theoretically optimal decision) in order to resolve the tie.


1. 400 survivors is not interpreted as 100 deaths, but rather as "at most 100 deaths".

2. This is Gigerenzer's schtick: "We don't really have any biases. It's just a question of presenting or rephrasing situations so that it becomes obvious how to deal with them." But it is precisely the fact that this is not done which comprises the bias! (That, and the fact that we don't even experience any need to rephrase the situation.)

3. What is the rationale for preferring expected utility over, say, a sure positive? How does one resolve that conflict, before and after the choice? To oneself? To others?
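
As for my point 3: the expected-value tie-breaker in the quote is nothing more mysterious than the following arithmetic, using the probabilities given in the dilemma (a quick sketch of my own, not anything from the original post).

```python
# Expected number of lives saved under the two (equivalent) options,
# using the probabilities given in the dilemma above.
ev_option_1 = 1.0 * 400                 # save 400 with certainty
ev_option_2 = 0.9 * 500 + 0.1 * 0       # 90% save 500, 10% save none
print(ev_option_1, ev_option_2)         # 400.0 vs 450.0 -> option 2 wins on EV
```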


---



The structure of the “parking ticket” and “cheque” scenarios are equivalent, in that both cases you can take an action to be $90 better off after 30 days. If you notice this, then it may be possible for you to re-interpret the action of paying off the parking ticket as something that gains you money, maybe by something like literally looking at it and imagining it as a cheque that you can cash in, until cashing it in starts feeling actively pleasant.

No. In one case, you lose something, or end up owing something that you may not even have. You wouldn't survive if you had to give away your food. In the other case, you go from surviving as usual to receiving a windfall, an extra bonus. This is exactly the kind of ill-conceived homo economicus rationality that even the economists have abandoned.


---

Reading through my notes, I am starting to wonder if what you're really saying is this: "There is no man in the middle, no unmoved mover, no central control to which we can ascribe beliefs and desires, or hold accountable; who is the author of our destiny, the locus of our (free) will."

And of course I agree.

Maybe your point is that viewing ourselves and others as intentional agents creates or reinforces these misconceptions; that we need to understand that we *don't* actually have good reasons for thinking, feeling and doing what we do. That to humble ourselves, we need to understand that the self, the agent, is a figment of our imagination, an illusion to explain our subconscious elephant to our translucent rider...

And I agree.

But still: The best way to summarize the totality is the intentional agent. Maybe this is the reason why I am confused: I have always, ever since childhood, been perfectly on board with a super-cynical view of people as biological contraptions, recently endowed with (an experience of) (self-)awareness; trying to make sense of the (apparent) voices inside our heads.

The intentional agent is a big improvement over many previous centuries of an over-inflated sense of importance. It is a description of how we have evolved to navigate in the world and coordinate with other moving objects. It is an "as if"-model. Nothing more. This is blatantly obvious to me.

The biases-and-fallacies paradigm serves as an educational device for convincing people who think that we know what we (and those around us) are doing that we don't. Or at least, that our guesses are just that: shortcuts that try to minimize catastrophic failures in a maximum number of (familiar) situations.


21 June 2019

Gigerenzer: Risk Savvy



It is sometimes said that for Kahneman, the glass of rationality is half-empty, and for Gigerenzer, the glass is half-full. One is a pessimist, the other an optimist. That characterization misses the point. We differ in what the glass of rationality is in the first place. Kahneman and followers take logic or probability theory as a general, ”content-blind” norm of rationality. In their thinking, heuristics can never be more accurate, only faster. That, however, is true only in a world of known risk. In an uncertain world, simple heuristics often can do better. The real research question is to understand why and when. The answers we know today are based on the bias-variance dilemma ... and the general study of ecological rationality ...


This book is both worse and better than I expected. It contains several interesting and useful concepts. But they come wrapped in mildly sneering, querulous, libertarian and anecdotal packaging.

It is fascinating to get a glimpse inside Gigerenzer's head. He is an obviously intelligent, meticulous and effective person, and his ambition is, at least on paper, laudable: to reduce the amount of unnecessary stupidity in the world. Several of his pieces of advice are sensible and pragmatic.

But somehow he simultaneously conveys the message that the world would work better if people were *not* required to give reasons for their actions; if individual and societal intuition were given more room, while individuals, by pulling themselves together and adopting a few simple rules of thumb, rose above human weakness and fashioned themselves into Nietzschean masters of their own destinies.

He says, for instance, regarding Sunstein and Thaler, that soft (libertarian) paternalism is *more* radical than the traditional "hard" kind, since the former assumes that people do not know their own (and others') best interests even in theory. That the existence of cognitive biases does *not* call for paternalism of any kind; that the biases are *not* genetically determined but the result of an overly impoverished environment and education.

This would be a positive message, almost in line with my own thoughts and hopes, were it not for the fact that all observations, not least his own, point in an entirely different direction. Besides, one wonders how education, political decision-making, accountability and the rest would work in the Hobbesian world he seems to envision.

It is an ambivalence I recognize from elsewhere, and one I will probably never quite understand.

This is an airport book, not one of Gigerenzer's heavier works. But I had still expected him to be more rigorous and neutral. By comparison, Hanson, Yudkowsky, Mercier and others come across as considerably more balanced and socially responsible.

That very figure of thought: just pull yourself together! On the one hand it is astonishingly naive. On the other, it probably reflects a feeling of "Ha, I'm so smart. Just do as I do. (And if you can't, so much the better for me.)"

Sunstein: Why Change Happens

80,000 Hours: Sunstein on how change happens, and why it's so often abrupt & unpredictable

An interesting and good talk by Sunstein. Useful concepts, exemplarily clear and well structured.

But isn't it a bit of The Emperor's New Clothes - in new clothes?

And above all: isn't there a big black hole gaping at the center of the whole argument? The sociology is there, but what about the psychology? And the philosophy?

It is about how we humans influence and are influenced by one another, and about how social movements gain momentum. Purely descriptively it seems very reasonable. (But at the same time obvious and familiar.)

Sunstein says that one can control the conditions under which social movements take off, and thereby restrain or promote them. But I ask myself *which* movements should be restrained or promoted, and why. It frightens me that these questions do not even seem to occur to Sunstein.

Fall, or Dodge in Hell



900 pages in four days. A record? On the one hand: doubtful whether it was worth the time. On the other: a perfect pastime during an annoying illness.

A strange, hard-to-review and mildly anxiety-inducing book. I have no idea whether I should recommend it. It certainly contains quite a few interesting things, but feels unnecessarily long. Though perhaps it has to be, in order to convey its message? Or does it?

I can agree with the reviews in NYT and WIRED, but that doesn't really say much.

The second half in particular is like a fever dream. It feels like something you might write down after a delirium but would never actually think of doing... You mostly just want to forget it. At the same time... the ideas around it are good... almost self-evident, somehow...

The afterword name-drops David Deutsch (whom I had thought of myself), Jaron Lanier and George Dyson. I just have a damn hard time seeing what the absurd wealth of detail adds. I feel tricked, exploited, into engaging with completely arbitrary digressions. Childish, somehow.

Dr. List is a strange figure

Dr. List is a strange figure, both lucid and confused at the same time.

Science Salon: Why Free Will is Real: A response to Sam Harris, Jerry Coyne, and Other Determinists



1) Free will has nothing to do with determinism.

2) Switching explanatory level buys you nothing but convenience. And causality is of course present at every useful level, including the intentional one. (Shermer doesn't get that either.)

3) Complexity, in the sense of chaotic input-output relations, is not non-determinism.

4) If compatibilism is about anything, it is about whether moral/legal responsibility is compatible with the fact that free will does not exist. (My answer to that is "sort of".)


Commenter 1:

Do you mean that chaosality is compatible with causality?


Yes, definitely! ;)


Commenter 1:

But what about free will and determinism - are they compatible? And what is free will?


Free will - in the only sense I accept - is a particular interpretation of Dennett's compatibilist definition: "that you could have acted otherwise". My interpretation is possibly somewhat narrower still than Dennett's own, namely this: that you are configured such that, had you been influenced by factors other than those that actually obtained, you could have acted otherwise. Roughly.

In practice this means that if we "rewind the tape" you would NOT act otherwise, since exactly the same factors obtain (determinism). BUT you are *influenceable* in a way that makes it reasonable to believe that you could act otherwise in a future (more or less identical) situation.

You can then be held morally/legally responsible, at least partly or secondarily, for not following laws and rules, since you could/could have followed them. Roughly.

Retributive justice, outrage, shame and guilt are folk-psychological proxies for *attempts to influence* future behavior, both yours and others'. Or for eliminating you and/or your behavior.
In my view, you can be guilty of a crime but never "deserve" a "just" punishment. Either you can be influenced, in which case that is what should be done (rehabilitation). Or you cannot be influenced, but others can be influenced by your punishment (deterrence), and you should perhaps be prevented (incarceration) from acting wrongly again. Roughly.

Note that even if the world and/or your own actions were non-deterministic, this would not amount to any "free" will. On the contrary, it would only make things worse. For if chance decides what you choose to do, then it is not *you* who chooses, and you are then *less free* than in a (practically) deterministic world/situation. And then you cannot be influenced in any reliable way either.


Commenter 2:

So if it is not the case that everything is fully deterministic, who are the ones that "could have acted otherwise"? Humans? Plants? Minerals? Molecules? Electrons? Quantum particles? Which factors did they use to choose how to act?


I sense that you have an objection to my reasoning above, but I am unsure what it is. Non-determinism, "choice" and subjectivity are terms that do not sit well together. Surely we agree on that?


Commenter 2:

No, we probably don't. Because it sounds as if you are saying that there are definitions of those terms that everyone agrees on, and I don't know whether that is true?

But my question was more: if someone believes in "free will" (whatever that means), where does free will begin? For whom or what? Why then, and why there and not somewhere else?


What I primarily object to is the naive and widespread "folk", unreflective libertarian idea of free will, which - if you scratch at it - is completely untenable. It rests on chance, on dualism and on the notion of the self as the unmoved mover. It leads in turn to a mistaken conception of morality and justice as retributive, and to all manner of superstition. According to List, up to 50% of "ordinary" people report such a view when asked to describe what they believe.

Secondarily, I object to the form of compatibilism that seems to smuggle in something similar, despite starting from determinism, by appealing to the possibility of a veto at a higher level of explanation/abstraction/intentionality.

Everyone should be made to read Thomas Nagel's "Moral Luck". It is high time we left primitive ideas of guilt behind. We can assign responsibility without deceiving ourselves.


Windup Girl: A sharp observation

From the novel The Windup Girl by Paolo Bacigalupi:


Chaiyanuchit understood what was at stake, and what had to be done. When the borders needed closing, when ministries needed isolating, when Phuket and Chiang Mai needed razing, he did not hesitate. When jungle blooms exploded in the north, he burned and burned and burned, and when he took to the sky in His Majesty the King's dirigible, Jaidee was blessed to ride with him.

By then, they were only mopping up. AgriGen and PurCal and the rest were shipping their plague-resistant seeds and demanding exorbitant profit, and patriotic gene rippers were already working to crack the code of the calorie companies' products, fighting to keep the Kingdom fed as Burma and the Vietnamese and the Khmer all fell. AgriGen and its ilk were threatening embargo over intellectual property infringement, but the Thai Kingdom was still alive. As others were crushed under the calorie companies' heels, the Kingdom stood strong.

Embargo! Chaiyanuchit had laughed. Embargo is precisely what we want! We do not wish to interact with the outside world at all.

And so the walls had gone up -- those that the oil collapse had not already created, those that had not been raised against civil war and starving refugees -- a final set of barriers to protect the Kingdom from the onslaughts of the outside world.

As a young inductee Jaidee had been astounded at the hive of activity that was the Environment Ministry. White shirts rushing from office to street as they tried to maintain tabs on thousands of hazards. In no other ministry was the sense of urgency so acute. Plagues waited for no one. A single genehack weevil found in an outlying district meant a response time counted in hours, white shirts on a kink-spring train rushing across the countryside to the epicenter.

And at every turn the Ministry's purview was expanding. The plagues were but the latest insult to the Kingdom's survival. First came the rising sea levels, the need to construct the dikes and levees. And then came the oversight of power contracts and trading in pollution credits and climate infractions. The white shirts took over the licensing of methane capture and production. Then there was the monitoring of fishery health and toxin accumulation in the Kingdom's final bastion of calorie support (a blessing that the farang calorie companies thought as land-locked people and had only desultorily attacked fishing stocks). And there was the tracking of human health and viruses and bacteria: H7V9; cibiscosis111b, c, d; fa' gan fringe; bitter water mussels, and their viral mutations that jumped so easily from saltwater to dry land; blister rust. . . There was no end to the duties of the Ministry.

Jaidee passes a woman selling bananas. He can't resist hopping off his bike to buy one. It's a new varietal from the Ministry's rapid prototyping unit. Fast growing, resistant to makmak mites with their tiny black eggs that sicken banana flowers before they can hope to grow. He peels the banana and eats it greedily as he pushes his bike along, wishing he could take the time to have a real snack. He discards the peel beside the bulk of a rain tree.

All life produces waste. The act of living produces costs, hazards and disposal questions, and so the Ministry has found itself in the center of all life, mitigating, guiding and policing the detritus of the average person along with investigating the infractions of the greedy and short-sighted, the ones who wish to make quick profits and trade on others' lives for it.

The symbol for the Environment Ministry is the eye of a tortoise, for the long view -- the understanding that nothing comes cheap or quickly without a hidden cost. And if others call them the Turtle Ministry, and if the Chaozhou Chinese now curse white shirts as turtle's eggs because they are not allowed to manufacture as many kink-spring scooters as they would like, so be it. If the farang make fun of the tortoise for its slow pace, so be it. The Environment Ministry has ensured that the Kingdom endures, and Jaidee can only stand in awe of its past glories.

And yet, when Jaidee climbs off his bicycle outside the Ministry gate, a man glares at him and a woman turns away. Even just outside their own compound -- or perhaps particularly there -- the people he protects turn away from him.

Jaidee grimaces and wheels his cycle past the guards.

The compound is still a hive of activity, and yet it is so different from when he first joined. There is mold on the walls and chunks of the edifice are cracking under the pressure of vines. An old bo tree leans against a wall, rotting, underlining their failures. It has lain so for ten years, rotting. Unremarked amongst the other things that have also died. There is an air of wreckage to the place, of jungle attempting to reclaim what was carved from it. If the vines were not cleared from the paths, the Ministry would disappear entirely. In a different time, when the Ministry was a hero of the people, it was different. Then, the people genuflected before Ministry officers, three times khrabbed to the ground as though they were monks themselves, their white uniforms inspiring respect and adoration. Now Jaidee watches civilians flinch as he walks past. Flinch and run.

He is a bully, he thinks sourly. Nothing but a bully walking amongst water buffalo, and though he tries to herd them with kindness, again and again, he finds himself using the whip of fear. The whole Ministry is the same -- at least, those who still understand the dangers that they face, who still believe in the bright white line of protection that must be maintained.

I am a bully.

He sighs and parks the cycle in front of the administrative offices, which are desperately in need of a whitewashing that the shrinking budget cannot finance. Jaidee eyes the building, wondering if the Ministry has come to crisis thanks to overreaching, or because of its phenomenal success. People have lost their fear of the outside world. Environment's budget shrinks yearly while that of Trade increases.

(pp. 172-175)

7 June 2019

The Handmaid's Tale is pretentious kitsch by the grace of God!



Now it has to be said: The Handmaid's Tale is pretentious kitsch by the grace of God!

I am probably not the only one who has held off for as long as possible from appearing irreverent towards the realization of what is, at bottom, highly relevant material, and of the elegant literary ideas of the source novel.

But by now we have suffered through more than twenty hours of infinitely slow storytelling and must conclude that the drawn-out scenes add nothing to plot, insight or emotional engagement. Instead we probably have to conclude that the show's producers are cynically maximizing the number of viewing hours.

This leads to other, more general, conclusions and speculations.

People are evidently still watching. That may be because people in general - like me - feel obliged to be indulgent towards the longueurs. Perhaps even to blame themselves for being bothered by them, but out of respect to suppress such feelings; to put it down to personal shortcomings and instead "school" themselves into disciplined consumers of high-culture products.

It may also be because the audience at large cannot tell the difference between good drama and a lot of drama. In some sense... In this case - due to fatigue effects (we long ago absorbed everything that is actually good in the production) - all that really remains is sheer duration.

One may believe one is watching good/important art (because it sure as hell isn't entertaining), and this in turn may partly be because one has not had, or been given, the chance to compare it with better alternatives.

The same goes for the producers. They obviously do not need to be any better than their competitors, or than the minimum level required to fill the TV couches. And when such a situation persists for a long time, they themselves are left with no role models to aspire to.

And now we come to my real suspicion (which may be naive): I do not believe that the producers themselves can tell the difference between drama and (good) drama.

And I believe that this, in turn, is because Americans in general do not possess a fraction of the depth and sophistication required either to create or to appreciate good drama.

Which is the chicken and which is the egg, I do not know. But I have essentially never seen an American film production, however pretentious, that comes anywhere near what, for example, the Danes produce and consume -- on an everyday basis!

The latest example was the third season of Follow the Money. Sure, if I raise the bar properly I could describe it as engineered, not to say manipulative and populist, assembly-line entertainment. But compared with The Handmaid's Tale I would rather characterize it as surgically sensitive, with the gravitas of a solid punch in the jaw!

(And the acting!!!!)

And that is still just one example of precisely this Danish routine work! We all know what they produce, regularly, beyond that.

When, after watching the final episode of (the third season of) Follow the Money, I watched the first three episodes of (the third season of) The Handmaid's Tale, I felt strongly:

The producers of the latter have never seen anything as good as Danish "routine drama".

And had they seen it, they would not be able to appreciate it!

I truly believe this.

I believe that cultural production in the US (including popular culture) constitutes a self-limiting cycle, in which shallow feelings and thoughts beget shallow works, which in turn feed shallow feelings and thoughts. They don't stand a chance. Never have. Never will.

I am not ashamed of all the time I spend, and have spent, consuming American drama. But I am ashamed - and frightened - by how often and how quickly I lose the ability to set the bar high enough, and even to have the sense to miss everything that is not achieved.

If I only had access to American drama I would shrink into an intellectual pygmy in no time. I would never get the chance to grow into something more. And the producers of The Handmaid's Tale probably never have either.

1 June 2019

Why They Can't Write

John Warner: Why They Can't Write


The first eight pages go straight to the heart of the matter. Here we have the problem; the whole problem and nothing but the problem. With today's school, but also with the society that created it and is created by it.

My indignation and my fury are awakened again. (I have had to dampen and suppress them in order to cope.)

The diagnosis is crystal clear. So is the analysis of the causes. But the question is whether the proposed cure is enough...

I have not got further than the first chapter, but there seems to be a tendency to focus solely on better teaching and to ignore the obvious fact - correctly identified as a large part of the problem - that NOT EVERYONE IS, OR CAN/WANTS TO BECOME, AN ACADEMIC. At least not within the framework of a standardized education system.

What Warner is really saying is that we cannot allow ourselves to pretend that mechanical rule-following passes as evidence of - or a substitute for - solid thinking. But at the same time I cannot see how today's school could reach a point where we are actually allowed to assess and sort students by their actual intellectual capacity or motivation. That is completely taboo.

Heck, these days it is almost taboo even to assess factual knowledge!

---

Much of what Warner says when he describes how demanding the writing process really is also applies to programming. But writing is MORE demanding than programming. Even so, programming is perceived as harder, because there you get nowhere without first having done the groundwork. In ordinary "writing", by contrast, you can churn out word salad WITHOUT having done either the groundwork or the additional work required for both content and form to be worth anything. There is also a parallel here to "hard" math, science and experimental GA projects on the one hand, and "easy" literature studies, open questions, qualitative investigations etc. on the other.

---

Part two is very satisfying, because it articulates every teacher's frustrations so well. I am just about to start part three. It will be genuinely interesting to read the brave concrete proposals, for I wonder whether and how the problems can be solved...

Grading and sorting do not seem to fit into Warner's picture of a good learning environment, but I find it hard to imagine an education system without them. I also find it hard to see how high-quality, highly qualified teaching can be carried out in anything but relatively homogeneous, well-prepared and well-motivated groups of students.

---

The introduction to part three frustrates me. As I suspected, there is a lot of Rousseau and Dewey. I am not sure I subscribe to Warner's goal, but insofar as I do, it is so radical that keeping the current structures is impossible - it would require utterly sweeping societal changes. And I wonder whether Warner really is thaaat radical.

Sure, in a socialist utopia where no one has to work or worry about life's necessities - there we could offer everyone unlimited opportunities to develop their own strengths and interests - without grades, demands or sorting.

And sure, we could dump every form of sorting and vocational training onto the business sector. That is actually the conclusion of "Against Education" - why should society pay for an inefficient and unnecessary "credentialing" that only serves to sift out useful idiots for the companies, in the vain hope that this will also give us educated and enlightened democratic citizens?

But we also need people who maintain and develop science, philosophy, sociology and so on. Genuine academics, intellectuals and researchers. That requires sorting, discipline and standardization.

Incidentally, it is not sleep deprivation and hunger that plague Swedish students. It is apathy, laziness, ignorance and arrogance. Yes, socioeconomic conditions matter more than pedagogical fixes, but even those with good conditions often lack something...

---

I fully agree with "Increasing rigor" and "Making writing meaningful". This is what I strive for myself. Next year's S+ course will be built on exactly this. But it is far from unproblematic.

I agree with everything in part four, and I think I do the best I can there. The chapter "What about academics" really summarizes what we know we ought to do to prepare students for their GA, but then reality strikes...

The end of the book is a plan for how we could work in a structured way with language development and program goals in years 1-3, leading up to and during the GA. Something like Aranäs. The focus is on "think first, write later". Warner's solution to the fact that we have to assign grades, even though grading only ruins the chance of genuine engagement, is to grade quantity rather than quality - which, according to him, also gives the best progression.