Something I’ve been thinking about for a while is the dual relationship between optimization and indifference, and the relationship between both of them and the idea of freedom.
Optimization: “Of all the possible actions available to me, which one is best (by some criterion)? OK, I’ll choose the best.”
Indifference: “Multiple possible options are equally good, or incommensurate (by the criterion I’m using). My decision algorithm equally allows me to take any of them.”
Total indifference between all options makes optimization impossible or vacuous. An optimization criterion which imposes a total ordering over all possibilities makes indifference vanishingly rare. So these notions are dual in a sense. Every dimension along which you optimize is in the domain of optimization; every dimension you leave “free” is in the domain of indifference.
Being “free” in one sense can mean “free to optimize”. I choose the outcome that is best according to an internal criterion, which is not blocked by external barriers. A limit on freedom is a constraint that keeps me away from my favorite choice. Either a natural limit (“I would like to do that but the technology doesn’t exist yet”) or a man-made limit (“I would like to do that but it’s illegal.”)
There’s an ambiguity here, of course, when it comes to whether you count “I would like to do that, but it would have a consequence I don’t like” as a limit on freedom. Is that a barrier blocking you from the optimal choice, or is it simply another way of saying that it’s not an optimal choice after all?
And, in the latter case, isn’t that basically equivalent to saying there is no such thing as a barrier to free choice? After all, “I would like to do that, but it’s illegal” is effectively the same thing as “I would like to do that, but it has a consequence I don’t like, such as going to jail.” You can get around this ambiguity in a political context by distinguishing natural from social barriers, but that’s not a particularly principled distinction.
Another issue with freedom-as-optimization is that it’s compatible with quite tightly constrained behavior, in a way that’s not consistent with our primitive intuitions about freedom. If you’re only “free” to do the optimal thing, that can mean you are free to do only one thing, all the time, as rigidly as a machine. If, for instance, you are only free to “act in your own best interests”, you don’t have the option to act against your best interests. People in real life can feel constrained by following a rigid algorithm even when they agree it’s “best”; “but what if I want to do something that’s not best?” Or, they can acknowledge they’re free to do what they choose, but are dismayed to learn that their choices are “dictated” as rigidly by habit and conditioning as they might have been by some human dictator.
An alternative notion is freedom-as-arbitrariness: freedom in the sense of “degrees of freedom” or a “free group”, derived from the intuition that freedom means breadth of possibility rather than optimization power. You are only free if you could equally well do any of a number of things, which ultimately means something like indifference.
This is the intuition behind claims like the one commonly attributed to Viktor Frankl: “Between stimulus and response there is a space. In that space is our power to choose a response. In our response lies our growth and our freedom.” If you always respond automatically to a given stimulus, you have only one choice, and that makes you unfree in the sense of “degrees of freedom.”
Venkat Rao’s concept of freedom is pretty much this freedom-as-arbitrariness, with some more specific wrinkles. He mentions degrees of freedom (“dimensionality”) as well as “inscrutability”, the inability to predict one’s motion from the outside.
Buddhists also often speak of freedom more literally in terms of indifference, and there’s a very straightforward logic to this; you can only choose equally between A and B if you have been “liberated” from the attractions and aversions that constrain you to choose A over B. Those who insist that Buddhism is compatible with a fairly normal life say that after Buddhist practice you still will choose systematically most of the time — your utility function cannot fully flatten if you act like a living organism — but that, like Viktor Frankl’s ideal human, you will be able to reflect with equanimity and consider choosing B over A; you will be more “mentally flexible.” Of course, some Buddhist texts simply say that you become actually indifferent, and that sufficient vipassana meditation will make you indistinguishable from a corpse.
Freedom-as-indifference, I think, is lurking behind our intuitions about things like “rights” or “ownership.” When we say you have a “right” to free speech — even a right bounded with certain limits, as it of course always is in practice — we mean that within those limits, you may speak however you want. Your rights define a space, within which you may behave arbitrarily. Not optimally. A right, if it’s not to be vacuous, must mean the right to behave “badly” in some way or other. To own a piece of property means that, within whatever limits the concept of ownership sets, you may make use of it in any way you like, even in suboptimal ways.
This is very clearly illustrated by Glen Weyl’s notion of radical markets, which neatly disassociates two concepts usually both considered representative of free-market systems: ownership and economic efficiency. To own something just is to be able to hang onto it even when it is economically inefficient to do so. As Weyl says, “property is monopoly.” The owner of a piece of land can sit on it, making no improvements, while holding out for a high price; the owner of intellectual property can sit on it without using it; in exactly the same way that a monopolist can sit on a factory and depress output while charging higher prices than he could get away with in a competitive market.
For better or for worse, rights and ownership define spaces in which you can destroy value. If your car was subject to a perpetual auction and ownership tax as Weyl proposes, bashing your car to bits with a hammer would cost you even if you didn’t personally need a car, because it would hurt the rental or resale value and you’d still be paying tax. On some psychological level, I think this means you couldn’t feel fully secure in your possessions, only probabilistically likely to be able to provide for your needs. You only truly own what you have a right to wreck.
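To make the incentive concrete, here is a toy sketch of a self-assessed (“Harberger”) tax in Python; the 7% rate, the prices, and the five-year horizon are made-up illustrative numbers, not parameters from Weyl’s actual proposal.

```python
# Toy sketch of a self-assessed ("Harberger") tax; rate and prices are
# illustrative assumptions, not parameters from Weyl's proposal.
TAX_RATE = 0.07  # assumed annual tax on whatever price you post

def cost_to_keep(posted_price, years):
    # You pay tax on your own posted price, and must sell to anyone who
    # offers that price: posting high is expensive, posting low is insecure.
    return TAX_RATE * posted_price * years

# Intact car worth ~10,000: post ~10,000, pay 700/year, and no rational
# buyer overpays to take it from you.
print(cost_to_keep(10_000, years=5))  # 3500.0

# Wrecked car: either keep posting 10,000 and pay 700/year to guard junk,
# or post its ~500 scrap value and let anyone claim it for 500.
print(cost_to_keep(500, years=5))     # 175.0, but the car is one bid from gone
```

Either way, bashing the car has an ongoing cost under the perpetual auction, whereas outright ownership makes the loss one-time and private.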
Freedom-as-a-space-of-arbitrary-action is also, I think, an intuition behind the fact that society (all societies, but the US more than other rich countries, I think) is shaped by people’s desire for more discretion in decisionmaking as opposed to transparent rubrics. College admissions, job applications, organizational codes of conduct, laws and tax codes, all are designed deliberately to allow ample discretion on the part of decisionmakers rather than restricting them to following “optimal” or “rational”, simple and legible, rules. Some discretion is necessary to ensure good outcomes; a wise human decisionmaker can always make the right decision in some hard cases where a mechanical checklist fails, simply because the human has more cognitive processing power than the checklist. This phenomenon is as old as Plato’s Laws and as current as the debate over algorithms and automation in medicine. However, what we observe in the world is more discretion than would be necessary, for the aforementioned reasons of cognitive complexity, to generate socially beneficial outcomes. We have discretion that enables corruption and special privileges in cases that pretty much nobody would claim to be ideal — rich parents buying their not-so-competent children Ivy League admissions, favored corporations voting themselves government subsidies. Decisionmakers want the “freedom” to make illegible choices, choices which would look “suboptimal” by naively sensible metrics like “performance” or “efficiency”, choices they would prefer not to reveal or explain to the public. Decisionmakers feel trapped when there’s too much “accountability” or “transparency”, and prefer a wider sphere of discretion. Or, to put it more unfavorably, they want to be free to destroy value.
And this is true at an individual psychological level too, of course — we want to be free to “waste time” and resist pressure to account for literally everything we do. Proponents of optimization insist that this is simply a failure mode from picking the wrong optimization target — rest, socializing, and entertainment are also needs, the optimal amount of time to devote to them isn’t zero, and you don’t have to consider personal time to be “stolen” or “wasted” or “bad”, you can, in principle, legibilize your entire life including your pleasures. Anything you wish you could do “in the dark”, off the record, you could also do “in the light,” explicitly and fully accounted for. If your boss uses “optimization” to mean overworking you, the problem is with your boss, not with optimization per se.
The freedom-as-arbitrariness impulse in us is skeptical.
I see optimization and arbitrariness everywhere now; I see intelligent people who more or less take one or the other as an ideology, and who see it as obviously correct.
Venkat Rao and Eric Weinstein are partisans of arbitrariness; they speak out in favor of “mediocrity” and against “excellence”, respectively. The rationale is that being highly optimized on some widely appreciated metric — being very intelligent, or very efficient, or something like that — is often less valuable than being creative, generating something in a part of the world that is “dark” to the rest of us, something that is not even on our map as a thing to value and thus appears as a lack of value. Ordinary people being “mediocre”, or talented people being “undisciplined” or “disreputable”, may be more creative than highly-optimized “top performers”.
Robin Hanson, by contrast, is a partisan of optimization; he speaks out against bias and unprincipled favoritism and in favor of systems like prediction markets which would force the “best ideas to win” in a fair competition. Proponents of ideas like radical markets, universal basic income, open borders, income-sharing agreements, or smart contracts (I’d here include, for instance, Vitalik Buterin) are also optimization partisans. These are legibilizing policies that, if optimally implemented, can always be Pareto improvements over the status quo; “whatever degree of wealth redistribution you prefer”, proponents claim, “surely it is better to achieve it in whatever way results in the least deadweight loss.” This is the very reason that they are not the policies that public choice theory would predict would emerge naturally in governments. Legibilizing policies allow little scope for discretion, so they don’t let policymakers give illegible rewards to allies and punishments to enemies. They reduce the scope of the “political”, i.e. that which is negotiated at the personal or group level, and replace it with an impersonal set of rules within which individuals are “free to choose” but not very “free to behave arbitrarily” since their actions are transparent and they must bear the costs of being in full view.
Optimization partisans are against weakly enforced rules — they say “if a rule is good, enforce it consistently; if a rule is bad, remove it; but selective enforcement is just another word for favoritism and corruption.” Illegibility partisans say that weakly enforced rules are the only way to incorporate valuable information — precisely that information which enforcers do not feel they can make explicit, either because it’s controversial or because it’s too complex to verbalize. “If you make everything explicit, you’ll dumb everything in the world down to what the stupidest and most truculent members of the public will accept. Say goodbye to any creative or challenging innovations!”
I see the value of arguments on both sides. However, I have positive (as opposed to normative) opinions that I don’t think everybody shares. I think that the world I see around me is moving in the direction of greater arbitrariness and has been since WWII or so (when much of US society, including scientific and technological research, was organized along military lines). I see arbitrariness as a thing that arises in “mature” or “late” organizations. Bigger, older companies are more “political” and more monopolistic. Bigger, older states and empires are more “corrupt” or “decadent.”
Arbitrariness has a tendency to protect those in power rather than those out of power, though the correlation isn’t perfect. Zones that protect your ability to do “whatever” you want without incurring costs (which include zones of privacy or property) are protective, conservative forces — they allow people security. This often means protection for those who already have a lot; arbitrariness is often “elitist”; but it can also protect “underdogs” on the grounds of tradition, or protect them by shrouding them in secrecy. (James C. Scott thought “illegibility” was a valuable defense of marginalized peoples like the Roma. Illegibility is not always the province of the powerful and privileged.) No; the people such zones of arbitrary, illegible freedom systematically harm are those who benefit from increased accountability and revealing of information: whistleblowers and accusers; those who expect their merit or performance is good enough that displaying it will work to their advantage; those who call for change and want to display information to justify it; those who are newcomers or young and want a chance to demonstrate their value.
If your intuition is “you don’t know me, but you’ll like me if you give me a chance” or “you don’t know him, but you’ll be horrified when you find out what he did”, or “if you gave me a chance to explain, you’d agree”, or “if you just let me compete, I bet I could win”, then you want more optimization.
If your intuition is “I can’t explain, you wouldn’t understand” or “if you knew what I was really like, you’d see what an impostor I am”, or “malicious people will just use this information to take advantage of me and interpret everything in the worst possible light” or “I’m not for public consumption, I am my own sovereign person, I don’t owe everyone an explanation or justification for actions I have a right to do”, then you’ll want less optimization.
Of course, these aren’t so much static “personality traits” of a person as one’s assessment of the situation around oneself. The latter cluster is an assumption that you’re living in a social environment where there’s very little concordance of interests — people knowing more about you will allow them to more effectively harm you. The former cluster is an assumption that you’re living in an environment where there’s a great deal of concordance of interests — people knowing more about you will allow them to more effectively help you.
For instance, being “predictable” is, in Venkat’s writing, usually a bad thing, because it means you can be exploited by adversaries. Free people are “inscrutable.” In other contexts, such as parenting, being predictable is a good thing, because you want your kids to have an easier time learning how to “work” the house rules. You and your kid are not, most of the time, wily adversaries outwitting each other; conflicts are more likely to come from too much confusion or inconsistently enforced boundaries. Relationship advice and management advice usually recommends making yourself easier for your partners and employees to understand, never more inscrutable. (Sales advice, however, and occasionally advice for keeping romance alive in a marriage, sometimes recommends cultivating an aura of mystery, perhaps because it’s more adversarial.)
A related notion: wanting to join discussions is a sign of expecting a more cooperative world, while trying to keep people from joining your (private or illegible) communications is a sign of expecting a more adversarial world.
As social organizations “mature” and become larger, it becomes harder to enforce universal and impartial rules, harder to keep the larger population aligned on similar goals, and harder to comprehend the more complex phenomena in this larger group. This means that there’s both motivation and opportunity to carve out “hidden” and “special” zones where arbitrary behavior can persist even when it would otherwise come with negative consequences.
New or small organizations, by contrast, must gain/create resources or die, so they have more motivation to “optimize” for resource production; and they’re simple, small, and/or homogeneous enough that legible optimization rules and goals and transparent communication are practical and widely embraced. “Security” is not available to begin with, so people mostly seek opportunity instead.
This theory explains, for instance, why US public policy is more fragmented, discretionary, and special-case-y, and less efficient and technocratic, than it is in other developed countries: the US is more racially diverse, which means, in a world where racism exists, that US civil institutions have evolved to allow ample opportunities to “play favorites” (giving special legal privileges to those with clout) in full generality, because a large population has historically been highly motivated to “play favorites” on the basis of race. Homogeneity makes a polity behave more like a “smaller” one, while diversity makes a polity behave more like a “larger” one.
Aesthetically, I think of optimization as corresponding to an “early” style, like Doric columns, or like Masaccio: simple, martial, all form and principle. Arbitrariness corresponds to a “late” style, like Corinthian columns or like Rubens: elaborate, sensual, full of details and personality.
The basic argument for optimization over arbitrariness is that it creates growth and value while arbitrariness creates stagnation.
Arbitrariness can’t really argue for itself as well, because communication itself is on the other side. Arbitrariness always looks illogical and inconsistent. It kind of is illogical and inconsistent. All it can say is “I’m going to defend my right to be wrong, because I don’t trust the world to understand me when I have a counterintuitive or hard-to-express or controversial reason for my choice. I don’t think I can get what I want by asking for it or explaining my reasons or playing ‘fair’.” And from the outside, you can’t always tell the difference between someone who thinks (perhaps correctly!) that the game is really rigged against them at a profound level, and somebody who just wants to cheat or who isn’t thinking coherently. Sufficiently advanced cynicism is indistinguishable from malice and stupidity.
For a fairly sympathetic example, you see something like Darkness at Noon, where the protagonist thinks, “Logic inexorably points to Stalinism; but Stalinism is awful! Therefore, let me insist on some space free from the depredations of logic, some space where justice can be tempered by mercy and reason by emotion.” From the distance of many years, it’s easy to say that’s silly, that of course there are reasons not to support Stalin’s purges, that it’s totally unnecessary to reject logic and justice in order to object to killing innocents. But from inside the system, if all the arguments you know how to formulate are Stalinist, if all the “shoulds” and “oughts” around you are Stalinist, perhaps all you can articulate at first is “I know all this is right, of course, but I don’t like it.”
Not everything people call reason, logic, justice, or optimization, is in fact reasonable, logical, just, or optimal; so, a person needs some defenses against those claims of superiority. In particular, defenses that can shelter them even when they don’t know what’s wrong with the claims. And that’s the closest thing we get to an argument in favor of arbitrariness. It’s actually not a bad point, in many contexts. The counterargument usually has to boil down to hope — to a sense of “I bet we can do better.”
Interesting post. There is a psychological construct called “maximization” that is of potential relevance:
https://en.wikipedia.org/wiki/Maximization_(psychology)
It seems like one way out of this might be to place value on exploration and variety? Rather than always choosing our favorite song or favorite dish, we alternate between different familiar dishes and sometimes try new ones. You can frame this as an explore/exploit tradeoff and optimize it by putting more weight on things you haven’t done recently or at all, without saying the choice is entirely arbitrary. However, the optimal balancing point between exploring and exploiting may not always be worth calculating.
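As a minimal sketch of that weighting idea (the dishes, scores, and novelty bonus below are all made up):

```python
import math

# Hypothetical preference scores, and days since each dish was last eaten.
preferences = {"curry": 0.9, "pasta": 0.7, "soup": 0.5, "new_dish": 0.4}
days_since  = {"curry": 1,   "pasta": 3,   "soup": 10,  "new_dish": 999}

NOVELTY_WEIGHT = 0.1  # how much we value variety relative to raw preference

def score(dish):
    # Logarithmic novelty bonus: it grows with time since the dish was last
    # chosen, so even a less-loved dish eventually wins a turn.
    return preferences[dish] + NOVELTY_WEIGHT * math.log1p(days_since[dish])

print(max(preferences, key=score))
# -> "new_dish": 0.4 + 0.1*ln(1000) ≈ 1.09 beats curry's 0.9 + 0.1*ln(2) ≈ 0.97
```

The exploit term (raw preference) and the explore term (novelty bonus) trade off through a single weight, which is exactly the knob that may not always be worth tuning carefully.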
A better and more articulate defense of “arbitrariness” (in many contexts) amounts to “your optimization metrics are bad, and I think it will be harder to get you to change them than it will to get you to leave me the hell alone.”
One source of freedom is willingness to take risks. Even if you are a hedge fund manager, you are not just supposed to make as much money as you can; you have to think about the chances of failure too. That makes me wonder how people who are pro-optimization agree on which risks to take.
I expect these categories to be actively unhelpful for making decisions in good faith, and don’t trust any decisionmaking process or explanation that uses them.
Sometimes a collective or centralized decisionmaking process will want to delegate some decisions to people with local information. Other times it will seem to the central decisionmaker like the value of constraining behavior is greater than the cost of forgoing the benefit of local information. And yet other times, the participants in a shared arrangement will worry that central decisionmakers are insufficiently aligned with their interests, and will want to constrain the central decisionmakers’ power.
None of these is well described as a move towards or away from “optimization” or “arbitrariness” – what changes is who’s in control of what and whom, and how that control is delegated or shared.
A lot of supposedly “perverse” or “arbitrary” behavior makes perfect sense as a way to test whether, or credibly signal that, others who might constrain your behavior are restrained from doing so. This information is important for planning and coordination, for the same reason that testing the constraints of physical reality is important. There is an optimal level of testing, and an optimal level of precision with which to estimate the optimal level of testing, for any coherent set of values. Again the arbitrary vs optimized dichotomy just doesn’t help explain what’s going on.
Likewise, confusing behavior is costly (since it’s ipso facto not clear action towards something you want), and you don’t want to pay more of that cost than necessary to throw off your enemies or potential exploiters.
Just wanted to note that I think the idea of arbitrariness presented here was insightful / helpful and seemed like the kind of thing that would provide a useful frame for thinking about certain kinds of situations.
What I do think this is helpful for is tracking some frame wars; people are often waving the flag of optimization or the opposed flag of negative freedom.
Social movements organized around the idea of optimization are a real thing, but the description is a lie – what they’re actually organized around is legibility, i.e. optimization from the perspective of and under the explicit control of centralized or otherwise standardized decisionmaking processes.
On rereading the post, I think this comment was just restating the last paragraph of the post. Oops, sorry!
Being shielded from competition along a certain set of dimensions allows play, and potentially innovation, in those dimensions. This is a vitally important function for breaking out of inadequate equilibria in those dimensions. The Molochian horror is the dissolution of all shield-generating activities, which gets us to permanent path-dependence lock-in. Interesting thought: Ra is sometimes a pretty good shield, as it gives those who have had good ideas in the past the freedom to continue to play.
You observe a correlation between early- vs. late-stage organizations / entities / societies and optimization vs. arbitrariness. I’m not sure to what extent that is actually true. I think 1) your sample / examples of corporations may not be representative, 2) your sample of culture and art may not be representative, and 3) there may be a systematic survivorship-bias-type effect in the thinking here.
-While startups may seem more transparent than large companies, large companies tend to be much more process-oriented and legible than mid-sized companies. And in fact, it might be precisely this shift toward higher legibility, which allows more application of optimization power, that enables the shift to large-company status or determines which companies are successful when they try to grow.
Keep in mind that the examples of companies that leap to the mind of someone living in the Bay Area are likely not representative of the broader sample of companies. Most Fortune 500 companies were never “startups” in the Bay Area sense; they were mid-sized equipment suppliers that moved to global supply, or regional groceries that expanded nationally, or financial institutions focused on one city or state that expanded to national or global prominence. And I think they typically made that transition via a deliberate focus on improving optimization and legibility.
-I know a lot less about art than I do about corporate governance, but I observe that all the examples from art you use are from classical southern Europe (Greece and Rome/Italy); that’s a pretty small part of Europe, which is, in turn, a pretty small part of the world. It doesn’t mean anything is wrong, but it’s the kind of thing that makes me have lower confidence. For example, does later-stage Japanese screen printing have more or less arbitrariness and abstraction than early-stage? Who knows?
-It also occurs to me that there may be something like the opposite of survivorship bias at play in how we think of these things. Looking backwards, we think of things as late stage because they ended or show problems. If they ended because of a lack of optimization power, then we will observe that their state immediately prior to the end didn’t seem focused on optimizing. For example, suppose some Chinese imperial dynasty comes into power and rules for 1400 years and over time, it builds optimization power and legibility, hitting peak optimization power 1300 years into the reign. Then in year 1345 a new emperor shifts to a more arbitrary style with less meritocratic advisors and more politics and the country is overrun by Mongolians 50 years later. The last state seems “late stage” to us because it was last, but it’s hard to describe any system which has existed for 1300 years as not being late stage. If we were looking at the empire in even the 900th year of the dynasty, it would seem obviously late stage to us.
I think there’s a finer distinction to be made here, between “arbitrariness” as a game-theoretic concept (“my needs can’t be compared to your needs, therefore life is adversarial”) and “arbitrariness” as an information-limitation concept (“the evidence that teaching my child a second language helps is vague at best, but it can’t hurt and seems intuitively useful”). Both of these types of things resist quantification (and sometimes bleed into each other), but encouraging the first kind of reasoning is “bad” while encouraging the second is “good”.
I have a philosophical preference for “messiness”. Say I am comparing two paradigms of ethics or economics, A and B. Say A is “free markets are good and government interventions are bad” and B is “free markets are bad and governments have a responsibility to manage agriculture”. Obviously “all A” and “all B” are bad: “all A” leads to things like the Irish famine (https://link.springer.com/article/10.1007/BF02746430) and “all B” leads to inefficiency and famine as well: https://en.wikipedia.org/wiki/Virgin_Lands_Campaign. Almost any balance of A and B is better than A or B themselves. But while the market debate is something lots and lots of people have thought about from both points of view, there are still many paradigms that no one has considered, or where the current consensus may yet shift, and where more “mess” is useful as either insurance (to prevent a disaster situation arising from “all A”) or idea generation. Almost by definition this can’t be quantified: unless there is some arbitrariness in how decisions get made, a new paradigm can never take off. However, you can try to quantify it by assuming that “basically good but possibly useless” things (such as micronutrients) often follow a roughly logarithmic (or at least highly nonlinear) utility function. Under such a function, even if A should be weighted at 1/4 of B, having “0.0001 A and 0.9999 B” yields utility which, because we’ve neglected A so much, is worse than having 50% A and 50% B. The optimum would be 1/5 A and 4/5 B, though the utility function would be extremely close to the maximum for any combination that looks intuitively like “mostly B but also some amount of A” (the exact weights matter little because of Fermat’s theorem on stationary points, https://en.wikipedia.org/wiki/Fermat%27s_theorem_(stationary_points), which is one of my favorite principles to apply in life).
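Reconstructing that arithmetic explicitly, under the logarithmic-utility assumption (the log form is an assumption consistent with the numbers above, not something derived elsewhere):

```latex
% Assume a unit budget split x : (1 - x) between A and B, with A
% weighted 1/4 as much as B, and log utility in each component:
\[
  U(x) = \tfrac{1}{4}\ln x + \ln(1 - x), \qquad
  U'(x) = \frac{1}{4x} - \frac{1}{1 - x} = 0
  \;\Longrightarrow\; 1 - x = 4x
  \;\Longrightarrow\; x = \tfrac{1}{5}.
\]
% This recovers the "1/5 A and 4/5 B" optimum. Since U'(1/5) = 0 (Fermat's
% theorem on stationary points), nearby mixes cost almost nothing, while
% x -> 0 sends (1/4) ln x to negative infinity: the "0.0001 A" catastrophe.
```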
I think an advantage of modernity is that (except on any heavily politicized issue) we have over time developed more “mess”: more points of view, some of which have stuck around. A lot of civil rights started as “mess” (e.g. gay and trans communities started out seen as seedy and heavily “alternative”). This isn’t to say that in any big debate with two options (like “more freedom vs. more optimization”) the best balance is “50/50” or “status quo” — this is an important debate where some amount of optimization is necessary. But if you have 100 different possible points of view, it’s incredibly destructive to say “point of view A is the most optimal one and we’ll ignore all others”. I think a lot of very idealistic people on the “optimization” side of your spectrum make this mistake by not realizing that even if A is objectively the most important of 100 different factors, choosing “all A and nothing else” is orders of magnitude worse (or at least orders of magnitude more risky) than choosing even a random distribution of weights.
“If you always respond automatically to a given stimulus, you have only one choice, and that makes you unfree in the sense of ‘degrees of freedom.’”
This part is contrary to my own intuitions about what *feels* free, which may not be what you are getting at, but I think this sheds light on why optimization doesn’t feel like freedom even given that you can choose what you’re optimizing.
I never more acutely feel constrained than when I am overriding a response to a stimulus. An inappropriate thought pops into my head and I don’t say it. I see a delicious dessert and I don’t eat it because it would be unhealthy. I see a sign saying something is prohibited, which in the “Don’t think of an elephant” fashion causes me to contemplate doing it, but I don’t. (Or maybe you’re this guy and you feel free: https://m.youtube.com/watch?v=R_dExRpUylI )
To feel constrained you must feel conflicted. I find this most powerful when reasons are overriding urges, though I think it is also possible to feel unfree when urges override reasons or when urges conflict with urges or reasons with reasons. Whereas when you act out of habits or ingrained responses that you don’t feel conflicted about you can feel perfectly free while being predictable.
So there is a critique of utilitarianism that says you can’t just add up utilities across distinct individuals. I don’t endorse this as a fatal flaw with utilitarianism, but there is an important point there that when you say the benefit to these people outweighs the harm to those people, those people don’t feel any less harmed. I bring this up because it can apply as well to conflict within a person. If part of me wants to do something (utility=80) and part says I shouldn’t (utility=-100) and I don’t do it, sure, you can describe my behavior as optimizing for something but “I kinda didn’t want to (-20)” doesn’t capture what I felt. I wanted to. And the part of me that wanted to didn’t get to do what it wanted. It was overridden. I don’t feel free, I feel constrained.
I think Balioc has a good point, the same one which kept coming to mind as I was reading this: the distinction between “optimization” and “arbitrariness” might not be so sharp, in that sometimes a person or group will advocate for a form of organization along arbitrariness lines not because they value arbitrariness but because they value optimization by a metric which they know others don’t, and won’t, share. This is arbitrariness for the sake of optimization, in a sense. (It is, for instance, the grounds for secularism in many modern Western states.) Although there is merit, I think, in abstracting the particular metrics out of optimization for the sake of seeing the patterns in optimization organizations, I don’t think anyone can at the end of the day understand how an optimization system is going to function without an account of those metrics, which means it’s unlikely anyone will understand how an arbitrariness system is going to function without an account of the forms of optimization against which it is reacting.
But I would go further: just as there is arbitrariness for the sake of optimization, so there is optimization for the sake of arbitrariness. This seems to me to be the motive of such things as universal basic income: in order for some areas of a person’s life, or of a society, to be able to function arbitrarily, others must function optimally by some metric to do with freeing up surplus value. I move in mostly leftist circles and that is entirely what I am seeing: the fundamental value is arbitrariness in most areas of life, but optimization in some areas is the practical means of achieving that. Again, the metrics along which one is optimizing are not incidental, even in the abstract.
Right, this is why it’s more like a duality than an opposition.
Ah, I had been reading this as an opposition; my apologies.
In machine learning there is a concept of the “learning rate”, usually denoted η. Naively, one might think that the higher this value is, the faster the algorithm will learn. Alas, things are not so simple. The way machine learning algorithms learn is by first picking some solution (usually randomly), checking how good the solution is, and then refining it slightly. This is repeated over and over until the solution is sufficiently accurate. The learning rate, then, refers to how far away from a given solution the algorithm goes looking for the next solution. For low enough learning rates, there is in fact a linear relationship between the speed of learning and η. However, this advantage eventually disappears; if the learning rate is too high, then the algorithm might jump right over some local minimum. The learning rate should strike a balance between these two extremes: low enough not to miss anything, high enough not to waste time.

However, there is another issue associated with a learning rate that is too low. The space of possible guesses might contain any number of local minima, and the algorithm should ideally find the global minimum, the lowest of them all. The way our described algorithm operates, it might find one minimum, then stay there forever, being too scared to lose it. Setting a high learning rate resolves this fear, as the algorithm will happily skip over one minimum and go searching for another one. If it ever settles, it will most likely be near the global minimum, the point with the strongest pull.
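Here is a runnable toy showing all three regimes; the one-dimensional loss and the specific rates are contrived for illustration, not anything standard:

```python
import math

# Toy 1-D "loss": a quadratic bowl (global minimum near x = 0) with a
# narrow, shallow dip carved out near x = 1.45 (a local minimum).
def f(x):
    return x**2 - 1.5 * math.exp(-((x - 1.5) ** 2) / 0.05)

def grad(x):
    return 2 * x + 60 * (x - 1.5) * math.exp(-((x - 1.5) ** 2) / 0.05)

def gradient_descent(eta, x0=1.6, steps=200):
    x = x0
    for _ in range(steps):
        x -= eta * grad(x)       # step "downhill", scaled by the learning rate
        if abs(x) > 1e6:         # learning rate too high: we have diverged
            return float("nan")
    return x

for eta in (0.01, 0.15, 1.5):
    x = gradient_descent(eta)
    print(f"eta = {eta}: ends near x = {x:.3f}")
# eta = 0.01 -> ~1.448 (trapped in the shallow local minimum)
# eta = 0.15 -> ~0.000 (skips over the dip and settles at the global minimum)
# eta = 1.5  -> nan    (overshoots farther each step and never settles)
```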
So, this is maybe a little tangential, but there’s this weird phenomenon (which, yes, I’ve talked about a bunch elsewhere 😛 ) where, like — OK so let’s consider, like, means of influencing another person, right? There’s this weird phenomenon where, like, some means of influence are experienced as an impingement on one’s freedom, and some aren’t, even though there doesn’t seem to be, like, any simple criterion I can find that differentiates the two. So there’s some odd thing going on where the internal feeling of freedom seems to be quite a different thing from any of the things we might usually mean by the word? IDK.