Sunday, December 21, 2025

For the Minimalist: World's Best, Lightest, Smallest, and Cheapest Money Clip












Pictured above; according to the box, its stated purpose is to be a 1" x 5/8" paper clip.

I learned of its existence many years ago in an article about Procter & Gamble, in which the writer noted that this particular paper clip was chosen by P&G for use throughout its corporate headquarters in Cincinnati because of its ability to hold securely up to 20 sheets of paper.

I repurposed it as a money clip and have found it superior to all others for the following reasons:

1) Cost — at $9.01 for 2 boxes of 100 (that's 4.5 cents apiece, in case you can't find your calculator), you can afford a lifetime supply and still give one to everyone you know

2) Size — you don't even notice its presence

3) Weight — 0.8 grams (0.03 oz.)

4) Functionality — there's a reason P&G chose it: because it works

5) Cool factor — there's nothing like it anywhere at any price












Why pay more?

Saturday, December 20, 2025

Why isn't food priced according to its 'sell-by' date?









Tim Harford, in his "Dear Economist" column in the Financial Times, explored this interesting subject; his thoughts follow.

    Dear Economist

    Q. When purchasing perishable food items I look for those that have the longest "use by" date, even if I intend to consume them immediately. As a result I often bypass items that will be within their "use by" date when I intend consuming them, in preference for items with an even longer shelf life. Can I be accused of being wasteful by not purchasing items with the shortest acceptable shelf life, since I am increasing the likelihood that they remain unsold?

    A. I hardly think the blame can be laid at your doorstep. The fault, instead, is with the unimaginatively static pricing on the part of the food retailers. They are presenting you with two different products at the same price, and you are simply choosing the better, fresher offering.

    It is true that if you plan to eat the food immediately, the value you place on the fresher product might be lower than the value to someone who planned to buy it and leave it sitting around for a couple of weeks.

    On the other hand, many people don't check the dates because they don't care. It would be a shame if they got the fresher product at your expense.

    Ideally, then, retailers would adjust their prices to reflect the staleness of the food, with the price declining very slightly over time, before being slashed as the "use by" date approaches. Freshness fetishists like you would gladly pay more, while students, pensioners, and computer programmers would scoop up the cheapest products and scrape off the mold.

    Products would be allocated efficiently according to preferences for freshness. It can only be a matter of time before the supermarkets catch on.
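
....................

Harford's sliding price is easy to mock up. Here's a toy TypeScript sketch of the kind of schedule he imagines: a price that drifts down slightly with age, then gets slashed as the "use by" date approaches. The item, shelf life, and decay numbers are invented for illustration; none of them come from the column.

```typescript
// Hypothetical freshness pricing in the spirit of Harford's suggestion.
// Every figure below is invented for illustration.
function freshnessPrice(fullPrice: number, daysToUseBy: number, shelfLifeDays: number): number {
  if (daysToUseBy <= 0) return 0;                // past its date: unsellable
  if (daysToUseBy <= 2) return fullPrice * 0.4;  // slashed as the "use by" date approaches
  const freshness = daysToUseBy / shelfLifeDays; // 1.0 means just stocked
  return fullPrice * (0.85 + 0.15 * freshness);  // declines very slightly over time
}

// A 2.40 (pick your currency) item with a 14-day shelf life:
for (const days of [14, 10, 6, 3, 2, 1]) {
  console.log(`${days} days left: ${freshnessPrice(2.4, days, 14).toFixed(2)}`);
}
```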

Nonsense — E.M. Cioran

When the ticking of a watch breaks the silence of eternity, arousing you out of serene contemplation, how can you help resenting the absurdity of time, its march into the future, and all the nonsense about evolution and progress? Why go forward, why live in time? The sudden revelation of time at such moments, conferring upon it a crushing preeminence otherwise nonexistent, is the fruit of a strong contempt for life, an unwillingness to go on. If this revelation happens at night, the sensation of unutterable loneliness is added to the absurdity of time, because then, far from the crowd, you face time alone, the two of you caught in an irreducible duality. Time, in this nocturnal desolation, is no longer populated with actions and objects: it becomes an evergrowing nothingness, a dilating void, a threat from beyond. Silence resounds then with a mournful clamor of bells knelling for a dead universe. Only he who has separated time from existence lives this drama: fleeing the latter, he is crushed by the former. And he feels how time, like death, gains ground.

BehindTheMedspeak: CPR — How to save a life without knowing a thing about it


Yes, it's a wonderful thing to know and use CPR in an emergency such that an individual who might have otherwise died survives intact.

But let's face it: few people know what to do and even fewer do it when push — on the chest, hard, 60 times a minute — comes to shove.

So here's bookofjoe's tip that might well let you — uninformed, unschooled, and scared — help someone live.

1. Bend down 

2. Pick up their feet 

3. Hold their legs in the air, waist-high

That's it.

By doing this you increase blood return to the central circulation — the heart, which needs to fill in order to pump blood and recover spontaneous heartbeat — from the periphery, where it's irrelevant to survival in a circulatory crisis.

Elevating the legs as described above is the equivalent of transfusing two units of blood — 40% of the average adult's total circulating blood volume — and makes CPR much more effective.

This is the first thing I instruct someone to do at the scene of a cardiac arrest.

Friday, December 19, 2025

Dynamic Pong Wars

The eternal battle between day and night, good and bad. 

Written in JavaScript with some HTML & CSS in one index.html.
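
If you'd like the flavor of it before you click, here's a minimal TypeScript sketch of the core idea (the real project is plain JavaScript in a single index.html): a grid split between day and night, with each side's ball bouncing around and flipping the other side's squares on contact. The grid size, speeds, and names below are mine, not the project's.

```typescript
// Toy model of the Pong Wars idea: a square grid split between two teams,
// each team's ball bouncing around and converting enemy squares to its own
// color on contact. Names, sizes, and speeds are illustrative only.

type Team = 0 | 1; // 0 = day, 1 = night

interface Ball {
  x: number; y: number;   // position, in cell units
  dx: number; dy: number; // velocity, in cells per tick
  team: Team;
}

const SIZE = 20;

// Left half starts as day, right half as night.
const grid: Team[][] = Array.from({ length: SIZE }, () =>
  Array.from({ length: SIZE }, (_, col): Team => (col < SIZE / 2 ? 0 : 1))
);

const balls: Ball[] = [
  { x: SIZE * 0.25, y: SIZE * 0.5, dx: 0.4, dy: 0.3, team: 0 },
  { x: SIZE * 0.75, y: SIZE * 0.5, dx: -0.4, dy: -0.3, team: 1 },
];

function step(): void {
  for (const b of balls) {
    b.x += b.dx;
    b.y += b.dy;

    // Bounce off the outer walls.
    if (b.x < 0 || b.x >= SIZE) { b.dx = -b.dx; b.x += b.dx; }
    if (b.y < 0 || b.y >= SIZE) { b.dy = -b.dy; b.y += b.dy; }

    // Landing on an enemy square captures it and bounces the ball back.
    const row = Math.floor(b.y);
    const col = Math.floor(b.x);
    if (grid[row][col] !== b.team) {
      grid[row][col] = b.team;
      b.dx = -b.dx;
      b.dy = -b.dy;
    }
  }
}

// Run for a while and report the territorial score.
for (let t = 0; t < 5000; t++) step();
const day = grid.flat().filter((c) => c === 0).length;
console.log(`day ${day} vs night ${SIZE * SIZE - day}`);
```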

Fair warning: there goes the day.

Hacksaw Hack


How much wood could a woodchuck chuck if a woodchuck could chuck wood chuck chuck?

I thought you'd never ask.

But I digress.

World-Class Hacksaw Hack

I happened on this in an early issue of Make Magazine, c. 2007.

It was in Mister Jalopy's "Blast From The Past" feature, which offered "Old-School Hand Tool Hacks: What I Learned From The 1963 Bureau of Naval Personnel Training Course."

Among the many useful and interesting tips this one stood out: it made me swoon with delight at 1) its simplicity, and 2) its obviousness — after the fact.

No more hacksaw proceedings truncated prematurely by throat-size limitation issues.

If you don't find this hack helpful, let me know and I'll cheerfully refund 3x what you paid for it.

201 Stories by Anton Chekhov






















Here they are, in the order of their publication in Russia between 1882 and 1904, the year he died from tuberculosis at age 44.

The complete stories were translated and published in 13 volumes between 1916 and 1922 by Constance Garnett, who stated, "I regret that it is impossible to obtain the necessary information for a chronological list."

That was then, this is now.

A thirteen-volume set of all 201 stories was published by Ecco Press in 1984; late in 2006, to coincide with its own thirty-fifth anniversary, Ecco republished the thirteen volumes in a boxed set.

Want one?

A snip at $899.99.

Mona Simpson wrote about the collected stories in the Atlantic magazine; her review follows.

    Tales of Chekhov

    In 1984, Daniel Halpern, founder of the Ecco Press, began republishing all 201 of the Constance Garnett translations of Anton Chekhov's stories. Since then, the thirteen resulting volumes have become a contemporary staple for the library of any serious reader. (I think I've purchased five whole sets in those twenty-odd years — several as wedding presents, one as a baby present, one remains in my study.) The price of around $8.50 per volume (which would total $110 for the series) represented a tremendous bargain for the most comprehensive collection of Chekhov stories in what is still the best complete translation available in English. Late in 2006, to coincide with its own thirty-fifth anniversary, Ecco republished the thirteen volumes in a handsome boxed set. After twenty years, the price has climbed only to $150.

    Chekhov is a master at making his characters' darkest aspects comprehensible and human. He's never sentimental and he's not particularly pleasant, but he will always feel modern because of his astonishing juxtapositions and the way his characters' swift, darting minds vacillate between idealism and boredom, vanity and hope. His narrator has a keen vision of class anger, resentment, and envy. Although less enchanted by his own characters than was Tolstoy, Chekhov acutely portrays largeheartedness.

    Given that all of the Chekhov stories translated by Garnett can be downloaded for free (James Rusk made them available at chekhov2.tripod.com), Ecco might be wise to assemble the books in durable hardback; they will always find a market.

....................

FunFact: Mona Simpson is Steve Jobs's sister. They only learned they were kin as adults, long after Jobs had been put up for adoption by his unmarried mother one week after his birth.

Thursday, December 18, 2025

Helpful Hints from joeeze: How to make a door stay part-way open


Gene Austin's "Do It Yourself" feature in the Philadelphia Inquirer spelled it out:

Q. One of our interior doors wants to open all the way when it is not latched. I would like to have it part-way open. Any suggestions?

A. The door is probably slightly out of plumb. A simple way to fix it is to remove one of the hinge pins, prop the ends of the pin on two pieces of wood, and strike the pin sharply in the middle with a hammer. The idea is to make a very slight bend in the pin so it will fit more tightly in the hinge. The friction of the bent pin will hold the door in any position.

'Are You Living in a Computer Simulation?'





























Nick Bostrom, a Swedish philosopher at Oxford, argues that there is a significant probability that all of us are mere computer simulations. In other words, he thinks that the science-fiction film "The Matrix" may well be fact and not fiction.

The abstract of Bostrom's foundational paper, "Are You Living in a Computer Simulation?", which appeared in Philosophical Quarterly in 2003, follows.

    Abstract

    This paper argues that at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching a "posthuman" stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation. It follows that the belief that there is a significant chance that we will one day become posthumans who run ancestor-simulations is false, unless we are currently living in a simulation. A number of other consequences of this result are also discussed.

....................

But perhaps you prefer the full monty.

Okay, then: here's the paper in its entirety.

    Are You Living in a Computer Simulation?

    I. Introduction

    Many works of science fiction as well as some forecasts by serious technologists and futurologists predict that enormous amounts of computing power will be available in the future. Let us suppose for a moment that these predictions are correct. One thing that later generations might do with their super-powerful computers is run detailed simulations of their forebears or of people like their forebears. Because their computers would be so powerful, they could run a great many such simulations. Suppose that these simulated people are conscious (as they would be if the simulations were sufficiently fine-grained and if a certain quite widely accepted position in the philosophy of mind is correct). Then it could be the case that the vast majority of minds like ours do not belong to the original race but rather to people simulated by the advanced descendants of an original race. It is then possible to argue that, if this were the case, we would be rational to think that we are likely among the simulated minds rather than among the original biological ones. Therefore, if we don’t think that we are currently living in a computer simulation, we are not entitled to believe that we will have descendants who will run lots of such simulations of their forebears. That is the basic idea. The rest of this paper will spell it out more carefully.

    Apart from the interest this thesis may hold for those who are engaged in futuristic speculation, there are also more purely theoretical rewards. The argument provides a stimulus for formulating some methodological and metaphysical questions, and it suggests naturalistic analogies to certain traditional religious conceptions, which some may find amusing or thought-provoking.

    The structure of the paper is as follows. First, we formulate an assumption that we need to import from the philosophy of mind in order to get the argument started. Second, we consider some empirical reasons for thinking that running vastly many simulations of human minds would be within the capability of a future civilization that has developed many of those technologies that can already be shown to be compatible with known physical laws and engineering constraints. This part is not philosophically necessary but it provides an incentive for paying attention to the rest. Then follows the core of the argument, which makes use of some simple probability theory, and a section providing support for a weak indifference principle that the argument employs. Lastly, we discuss some interpretations of the disjunction, mentioned in the abstract, that forms the conclusion of the simulation argument.


    II. The Assumption of Substrate-Independence

    A common assumption in the philosophy of mind is that of substrate-independence. The idea is that mental states can supervene on any of a broad class of physical substrates. Provided a system implements the right sort of computational structures and processes, it can be associated with conscious experiences. It is not an essential property of consciousness that it is implemented on carbon-based biological neural networks inside a cranium: silicon-based processors inside a computer could in principle do the trick as well.

    Arguments for this thesis have been given in the literature, and although it is not entirely uncontroversial, we shall here take it as a given.

    The argument we shall present does not, however, depend on any very strong version of functionalism or computationalism. For example, we need not assume that the thesis of substrate-independence is necessarily true (either analytically or metaphysically) – just that, in fact, a computer running a suitable program would be conscious. Moreover, we need not assume that in order to create a mind on a computer it would be sufficient to program it in such a way that it behaves like a human in all situations, including passing the Turing test etc. We need only the weaker assumption that it would suffice for the generation of subjective experiences that the computational processes of a human brain are structurally replicated in suitably fine-grained detail, such as on the level of individual synapses. This attenuated version of substrate-independence is quite widely accepted.

    Neurotransmitters, nerve growth factors, and other chemicals that are smaller than a synapse clearly play a role in human cognition and learning. The substrate-independence thesis is not that the effects of these chemicals are small or irrelevant, but rather that they affect subjective experience only via their direct or indirect influence on computational activities. For example, if there can be no difference in subjective experience without there also being a difference in synaptic discharges, then the requisite detail of simulation is at the synaptic level (or higher).


    III. The Technological Limits of Computation

    At our current stage of technological development, we have neither sufficiently powerful hardware nor the requisite software to create conscious minds in computers. But persuasive arguments have been given to the effect that if technological progress continues unabated then these shortcomings will eventually be overcome. Some authors argue that this stage may be only a few decades away. Yet present purposes require no assumptions about the time-scale. The simulation argument works equally well for those who think that it will take hundreds of thousands of years to reach a "posthuman" stage of civilization, where humankind has acquired most of the technological capabilities that one can currently show to be consistent with physical laws and with material and energy constraints.

    Such a mature stage of technological development will make it possible to convert planets and other astronomical resources into enormously powerful computers. It is currently hard to be confident in any upper bound on the computing power that may be available to posthuman civilizations. As we are still lacking a "theory of everything", we cannot rule out the possibility that novel physical phenomena, not allowed for in current physical theories, may be utilized to transcend those constraints that in our current understanding impose theoretical limits on the information processing attainable in a given lump of matter. We can with much greater confidence establish lower bounds on posthuman computation, by assuming only mechanisms that are already understood. For example, Eric Drexler has outlined a design for a system the size of a sugar cube (excluding cooling and power supply) that would perform 10^21 instructions per second. Another author gives a rough estimate of 10^42 operations per second for a computer with a mass on order of a large planet. (If we could create quantum computers, or learn to build computers out of nuclear matter or plasma, we could push closer to the theoretical limits.

    Seth Lloyd calculates an upper bound for a 1 kg computer of 5*10^50 logical operations per second carried out on ~10^31 bits.[5] However, it suffices for our purposes to use the more conservative estimate that presupposes only currently known design-principles.) The amount of computing power needed to emulate a human mind can likewise be roughly estimated. One estimate, based on how computationally expensive it is to replicate the functionality of a piece of nervous tissue that we have already understood and whose functionality has been replicated in silico, contrast enhancement in the retina, yields a figure of ~10^14 operations per second for the entire human brain. An alternative estimate, based on the number of synapses in the brain and their firing frequency, gives a figure of ~10^16-10^17 operations per second. Conceivably, even more could be required if we want to simulate in detail the internal workings of synapses and dendritic trees. However, it is likely that the human central nervous system has a high degree of redundancy on the microscale to compensate for the unreliability and noisiness of its neuronal components. One would therefore expect a substantial efficiency gain when using more reliable and versatile non-biological processors.

    Memory seems to be a no more stringent constraint than processing power. Moreover, since the maximum human sensory bandwidth is ~10^8 bits per second, simulating all sensory events incurs a negligible cost compared to simulating the cortical activity. We can therefore use the processing power required to simulate the central nervous system as an estimate of the total computational cost of simulating a human mind.

    If the environment is included in the simulation, this will require additional computing power — how much depends on the scope and granularity of the simulation. Simulating the entire universe down to the quantum level is obviously infeasible, unless radically new physics is discovered. But in order to get a realistic simulation of human experience, much less is needed — only whatever is required to ensure that the simulated humans, interacting in normal human ways with their simulated environment, don’t notice any irregularities. The microscopic structure of the inside of the Earth can be safely omitted. Distant astronomical objects can have highly compressed representations: verisimilitude need extend only to the narrow band of properties that we can observe from our planet or solar system spacecraft.

    On the surface of Earth, macroscopic objects in inhabited areas may need to be continuously simulated, but microscopic phenomena could likely be filled in ad hoc. What you see through an electron microscope needs to look unsuspicious, but you usually have no way of confirming its coherence with unobserved parts of the microscopic world. Exceptions arise when we deliberately design systems to harness unobserved microscopic phenomena that operate in accordance with known principles to get results that we are able to independently verify. The paradigmatic case of this is a computer. The simulation may therefore need to include a continuous representation of computers down to the level of individual logic elements. This presents no problem, since our current computing power is negligible by posthuman standards.

    Moreover, a posthuman simulator would have enough computing power to keep track of the detailed belief-states in all human brains at all times. Therefore, when it saw that a human was about to make an observation of the microscopic world, it could fill in sufficient detail in the simulation in the appropriate domain on an as-needed basis. Should any error occur, the director could easily edit the states of any brains that have become aware of an anomaly before it spoils the simulation. Alternatively, the director could skip back a few seconds and rerun the simulation in a way that avoids the problem.

    It thus seems plausible that the main computational cost in creating simulations that are indistinguishable from physical reality for human minds in the simulation resides in simulating organic brains down to the neuronal or sub-neuronal level. While it is not possible to get a very exact estimate of the cost of a realistic simulation of human history, we can use ~10^33 - 10^36 operations as a rough estimate. As we gain more experience with virtual reality, we will get a better grasp of the computational requirements for making such worlds appear realistic to their visitors. But in any case, even if our estimate is off by several orders of magnitude, this does not matter much for our argument.

    We noted that a rough approximation of the computational power of a planetary-mass computer is 10^42 operations per second, and that assumes only already known nanotechnological designs, which are probably far from optimal. A single such computer could simulate the entire mental history of humankind (call this an ancestor-simulation) by using less than one millionth of its processing power for one second. A posthuman civilization may eventually build an astronomical number of such computers. We can conclude that the computing power available to a posthuman civilization is sufficient to run a huge number of ancestor-simulations even if it allocates only a minute fraction of its resources to that purpose. We can draw this conclusion even while leaving a substantial margin of error in all our estimates:

    Posthuman civilizations would have enough computing power to run hugely many ancestor-simulations even while using only a tiny fraction of their resources for that purpose.


    [IV. The Core of the Simulation Argument: this section appeared as an image in the original post and is not reproduced here.]


    V. A Bland Indifference Principle

    We can take a further step and conclude that conditional on the truth of (3), one's credence in the hypothesis that one is in a simulation should be close to unity. More generally, if we knew that a fraction x of all observers with human-type experiences live in simulations, and we don’t have any information that indicates that our own particular experiences are any more or less likely than other human-type experiences to have been implemented in vivo rather than in machina, then our credence that we are in a simulation should equal x:

    Cr(SIM | f_sim = x) = x          (#)

    This step is sanctioned by a very weak indifference principle. Let us distinguish two cases. The first case, which is the easiest, is where all the minds in question are like your own in the sense that they are exactly qualitatively identical to yours: they have exactly the same information and the same experiences that you have. The second case is where the minds are "like" each other only in the loose sense of being the sort of minds that are typical of human creatures, but they are qualitatively distinct from one another and each has a distinct set of experiences. I maintain that even in the latter case, where the minds are qualitatively different, the simulation argument still works, provided that you have no information that bears on the question of which of the various minds are simulated and which are implemented biologically.

    A detailed defense of a stronger principle, which implies the above stance for both cases as trivial special instances, has been given in the literature. Space does not permit a recapitulation of that defense here, but we can bring out one of the underlying intuitions by bringing to our attention an analogous situation of a more familiar kind. Suppose that x% of the population has a certain genetic sequence S within the part of their DNA commonly designated as "junk DNA". Suppose, further, that there are no manifestations of S (short of what would turn up in a gene assay) and that there are no known correlations between having S and any observable characteristic. Then, quite clearly, unless you have had your DNA sequenced, it is rational to assign a credence of x% to the hypothesis that you have S. And this is so quite irrespective of the fact that the people who have S have qualitatively different minds and experiences from the people who don’t have S. (They are different simply because all humans have different experiences from one another, not because of any known link between S and what kind of experiences one has.)

    The same reasoning holds if S is not the property of having a certain genetic sequence but instead the property of being in a simulation, assuming only that we have no information that enables us to predict any differences between the experiences of simulated minds and those of the original biological minds.

    It should be stressed that the bland indifference principle expressed by (#) prescribes indifference only between hypotheses about which observer you are, when you have no information about which of these observers you are. It does not in general prescribe indifference between hypotheses when you lack specific information about which of the hypotheses is true. In contrast to Laplacean and other more ambitious principles of indifference, it is therefore immune to Bertrand's paradox and similar predicaments that tend to plague indifference principles of unrestricted scope.

    Readers familiar with the Doomsday argument may worry that the bland principle of indifference invoked here is the same assumption that is responsible for getting the Doomsday argument off the ground, and that the counterintuitiveness of some of the implications of the latter incriminates or casts doubt on the validity of the former. This is not so. The Doomsday argument rests on a much stronger and more controversial premiss, namely that one should reason as if one were a random sample from the set of all people who will ever have lived (past, present, and future) even though we know that we are living in the early twenty-first century rather than at some point in the distant past or the future. The bland indifference principle, by contrast, applies only to cases where we have no information about which group of people we belong to.

    If betting odds provide some guidance to rational belief, it may also be worth pondering that if everybody were to place a bet on whether they are in a simulation or not, then if people use the bland principle of indifference, and consequently place their money on being in a simulation if they know that that's where almost all people are, then almost everyone will win their bets. If they bet on not being in a simulation, then almost everyone will lose. It seems better that the bland indifference principle be heeded.

    Further, one can consider a sequence of possible situations in which an increasing fraction of all people live in simulations: 98%, 99%, 99.9%, 99.9999%, and so on. As one approaches the limiting case in which everybody is in a simulation (from which one can deductively infer that one is in a simulation oneself), it is plausible to require that the credence one assigns to being in a simulation gradually approach the limiting case of complete certainty in a matching manner.


    VI. Interpretation

    The possibility represented by proposition (1) is fairly straightforward. If (1) is true, then humankind will almost certainly fail to reach a posthuman level; for virtually no species at our level of development become posthuman, and it is hard to see any justification for thinking that our own species will be especially privileged or protected from future disasters. Conditional on (1), therefore, we must give a high credence to DOOM, the hypothesis that humankind will go extinct before reaching a posthuman level.

    One can imagine hypothetical situations where we have such evidence as would trump knowledge of . For example, if we discovered that we were about to be hit by a giant meteor, this might suggest that we had been exceptionally unlucky. We could then assign a credence to DOOM larger than our expectation of the fraction of human-level civilizations that fail to reach posthumanity. In the actual case, however, we seem to lack evidence for thinking that we are special in this regard, for better or worse.

    Proposition (1) doesn’t by itself imply that we are likely to go extinct soon, only that we are unlikely to reach a posthuman stage. This possibility is compatible with us remaining at, or somewhat above, our current level of technological development for a long time before going extinct. Another way for (1) to be true is if it is likely that technological civilization will collapse. Primitive human societies might then remain on Earth indefinitely.

    There are many ways in which humanity could become extinct before reaching posthumanity. Perhaps the most natural interpretation of (1) is that we are likely to go extinct as a result of the development of some powerful but dangerous technology. One candidate is molecular nanotechnology, which in its mature stage would enable the construction of self-replicating nanobots capable of feeding on dirt and organic matter — a kind of mechanical bacteria. Such nanobots, designed for malicious ends, could cause the extinction of all life on our planet.

    The second alternative in the simulation argument's conclusion is that the fraction of posthuman civilizations that are interested in running ancestor-simulations is negligibly small. In order for (2) to be true, there must be a strong convergence among the courses of advanced civilizations. If the number of ancestor-simulations created by the interested civilizations is extremely large, the rarity of such civilizations must be correspondingly extreme. Virtually no posthuman civilizations decide to use their resources to run large numbers of ancestor-simulations. Furthermore, virtually all posthuman civilizations lack individuals who have sufficient resources and interest to run ancestor-simulations; or else they have reliably enforced laws that prevent such individuals from acting on their desires.

    What force could bring about such convergence? One can speculate that advanced civilizations all develop along a trajectory that leads to the recognition of an ethical prohibition against running ancestor-simulations because of the suffering that is inflicted on the inhabitants of the simulation. However, from our present point of view, it is not clear that creating a human race is immoral. On the contrary, we tend to view the existence of our race as constituting a great ethical value. Moreover, convergence on an ethical view of the immorality of running ancestor-simulations is not enough: it must be combined with convergence on a civilization-wide social structure that enables activities considered immoral to be effectively banned.

    Another possible convergence point is that almost all individual posthumans in virtually all posthuman civilizations develop in a direction where they lose their desires to run ancestor-simulations. This would require significant changes to the motivations driving their human predecessors, for there are certainly many humans who would like to run ancestor-simulations if they could afford to do so.

    But perhaps many of our human desires will be regarded as silly by anyone who becomes a posthuman. Maybe the scientific value of ancestor-simulations to a posthuman civilization is negligible (which is not too implausible given its unfathomable intellectual superiority), and maybe posthumans regard recreational activities as merely a very inefficient way of getting pleasure — which can be obtained much more cheaply by direct stimulation of the brain’s reward centers. One conclusion that follows from (2) is that posthuman societies will be very different from human societies: they will not contain relatively wealthy independent agents who have the full gamut of human-like desires and are free to act on them.

    The possibility expressed by alternative (3) is the conceptually most intriguing one. If we are living in a simulation, then the cosmos that we are observing is just a tiny piece of the totality of physical existence. The physics in the universe where the computer is situated that is running the simulation may or may not resemble the physics of the world that we observe. While the world we see is in some sense "real", it is not located at the fundamental level of reality.

    It may be possible for simulated civilizations to become posthuman. They may then run their own ancestor-simulations on powerful computers they build in their simulated universe. Such computers would be "virtual machines", a familiar concept in computer science. (Java script web-applets, for instance, run on a virtual machine — a simulated computer — inside your desktop.) Virtual machines can be stacked: it's possible to simulate a machine simulating another machine, and so on, in arbitrarily many steps of iteration. If we do go on to create our own ancestor-simulations, this would be strong evidence against (1) and (2), and we would therefore have to conclude that we live in a simulation. Moreover, we would have to suspect that the posthumans running our simulation are themselves simulated beings; and their creators, in turn, may also be simulated beings.

    Reality may thus contain many levels. Even if it is necessary for the hierarchy to bottom out at some stage — the metaphysical status of this claim is somewhat obscure — there may be room for a large number of levels of reality, and the number could be increasing over time. (One consideration that counts against the multi-level hypothesis is that the computational cost for the basement-level simulators would be very great. Simulating even a single posthuman civilization might be prohibitively expensive. If so, then we should expect our simulation to be terminated when we are about to become posthuman.)

    Although all the elements of such a system can be naturalistic, even physical, it is possible to draw some loose analogies with religious conceptions of the world. In some ways, the posthumans running a simulation are like gods in relation to the people inhabiting the simulation: the posthumans created the world we see; they are of superior intelligence; they are "omnipotent" in the sense that they can interfere in the workings of our world even in ways that violate its physical laws; and they are "omniscient" in the sense that they can monitor everything that happens. However, all the demigods except those at the fundamental level of reality are subject to sanctions by the more powerful gods living at lower levels.

    Further rumination on these themes could climax in a naturalistic theogony that would study the structure of this hierarchy, and the constraints imposed on its inhabitants by the possibility that their actions on their own level may affect the treatment they receive from dwellers of deeper levels. For example, if nobody can be sure that they are at the basement-level, then everybody would have to consider the possibility that their actions will be rewarded or punished, based perhaps on moral criteria, by their simulators. An afterlife would be a real possibility. Because of this fundamental uncertainty, even the basement civilization may have a reason to behave ethically. The fact that it has such a reason for moral behavior would of course add to everybody else's reason for behaving morally, and so on, in a truly virtuous circle. One might get a kind of universal ethical imperative, which it would be in everybody's self-interest to obey, as it were "from nowhere".

    In addition to ancestor-simulations, one may also consider the possibility of more selective simulations that include only a small group of humans or a single individual. The rest of humanity would then be zombies or "shadow-people" — humans simulated only at a level sufficient for the fully simulated people not to notice anything suspicious. It is not clear how much cheaper shadow-people would be to simulate than real people. It is not even obvious that it is possible for an entity to behave indistinguishably from a real human and yet lack conscious experience.

    Even if there are such selective simulations, you should not think that you are in one of them unless you think they are much more numerous than complete simulations. There would have to be about 100 billion times as many "me-simulations" (simulations of the life of only a single mind) as there are ancestor-simulations in order for most simulated persons to be in me-simulations.

    There is also the possibility of simulators abridging certain parts of the mental lives of simulated beings and giving them false memories of the sort of experiences that they would typically have had during the omitted interval. If so, one can consider the following (farfetched) solution to the problem of evil: that there is no suffering in the world and all memories of suffering are illusions. Of course, this hypothesis can be seriously entertained only at those times when you are not currently suffering.

    Supposing we live in a simulation, what are the implications for us humans? The foregoing remarks notwithstanding, the implications are not all that radical. Our best guide to how our posthuman creators have chosen to set up our world is the standard empirical study of the universe we see. The revisions to most parts of our belief networks would be rather slight and subtle — in proportion to our lack of confidence in our ability to understand the ways of posthumans. Properly understood, therefore, the truth of (3) should have no tendency to make us "go crazy" or to prevent us from going about our business and making plans and predictions for tomorrow. The chief empirical importance of (3) at the current time seems to lie in its role in the tripartite conclusion established above. We may hope that (3) is true since that would decrease the probability of (1), although if computational constraints make it likely that simulators would terminate a simulation before it reaches a posthuman level, then our best hope would be that (2) is true.

    If we learn more about posthuman motivations and resource constraints, maybe as a result of developing towards becoming posthumans ourselves, then the hypothesis that we are simulated will come to have a much richer set of empirical implications.


    VII. Conclusion

    A technologically mature "posthuman" civilization would have enormous computing power. Based on this empirical fact, the simulation argument shows that at least one of the following propositions is true: (1) The fraction of human-level civilizations that reach a posthuman stage is very close to zero; (2) The fraction of posthuman civilizations that are interested in running ancestor-simulations is very close to zero; (3) The fraction of all people with our kind of experiences that are living in a simulation is very close to one.

    If (1) is true, then we will almost certainly go extinct before reaching posthumanity. If (2) is true, then there must be a strong convergence among the courses of advanced civilizations so that virtually none contains any relatively wealthy individuals who desire to run ancestor-simulations and are free to do so. If (3) is true, then we almost certainly live in a simulation. In the dark forest of our current ignorance, it seems sensible to apportion one’s credence roughly evenly between (1), (2), and (3).

    Unless we are now living in a simulation, our descendants will almost certainly never run an ancestor-simulation.
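
....................

The back-of-the-envelope arithmetic in Section III is easy to check for yourself. Here's a quick TypeScript sketch using only the round figures quoted above (10^42 operations per second for a planetary-mass computer built from known designs, 10^33 to 10^36 operations for one ancestor-simulation, and 10^16 to 10^17 operations per second per human brain); the numbers are Bostrom's estimates, nothing more precise.

```typescript
// Sanity check of the Section III estimates, using Bostrom's round numbers.
const planetaryOpsPerSecond = 1e42; // planetary-mass computer, known designs only
const opsPerAncestorSim = 1e36;     // upper end of the 10^33 to 10^36 range
const brainOpsPerSecond = 1e17;     // upper end of the per-brain estimate

// One full ancestor-simulation, as a fraction of one second of the
// planetary computer's output:
console.log(opsPerAncestorSim / planetaryOpsPerSecond); // 1e-6, i.e. "one millionth ... for one second"

// Simulated brain-seconds bought by each real second of computation:
console.log(planetaryOpsPerSecond / brainOpsPerSecond); // 1e25
```

Even if either estimate is off by several orders of magnitude, as the paper itself allows, the gap is wide enough that the italicized conclusion at the end of Section III still goes through.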

Confessions of a Book Abuser — Ben Schott



















Schott, the creator of the "Schott's Miscellanies" and "Schott's Almanac" series, hit home with a New York Times Book Review back page essay.

Man, did he make me wince.

Tearing books apart and throwing away the pages I've read; turning down (or up) page corners to mark passages of interest or my place; throwing them away. He nailed me.

Here's his most entertaining essay, which will undoubtedly strike a chord (or three) in the conscience of any inveterate reader.

    Confessions of a Book Abuser

    I have to admit I was flattered when, returning to my hotel room on the shores of Lake Como, a beautiful Italian chambermaid took my hand. I knew that the hotel was noted for the attentiveness of its staff. Surely, though, such boldness elevated room service to a new level. Escorting me to the edge of the crisply made bed, the chambermaid pointed to a book on my bedside table. "Does this belong to you?" she asked. I looked down to see a dog-eared copy of Evelyn Waugh’s "Vile Bodies" open spread-eagle, its cracked spine facing out. "Yes," I replied. "Sir, that is no way to treat a book!" she declared, stalking out of the room.

    I appreciate the chambermaid's point of view — and I admire how she expressed it. Yet I profoundly disagree. While the ideas expressed in even the vilest of books are worthy of protection, I find it difficult to respect books as objects, and see no harm whatsoever in abusing them.

    There are, of course, some important exceptions: rare books or those of historical interest, books with fine binding or elegant illustrations, unpurchased books in bookshops, and books belonging to other people or to libraries. All of these I treat with a care and consideration that I would not dream of bestowing on the average mass-produced paperback. Once a book is mine, I see no reason to read it with kid gloves. And if you have ever seen a printing press disgorge best sellers at 20,000 copies an hour, you might be tempted to agree. It is the content of books that counts, not the books themselves — no matter how well they furnish a room.

    Indeed, the ability of books to survive abuse is one of the reasons they are such remarkable objects, elevated far beyond, say, Web sites. One cannot borrow a Web site from a friend and not return it for years. One cannot, yet, fold a Web site into one's back pocket, nor drop a Web site into the bath. One cannot write comments, corrections or shopping lists on Web sites only to rediscover them (indecipherable) years later. One cannot besmear a Web site with suntan-lotioned fingers, nor lodge sand between its pages. One cannot secure a wobbly table with a slim Web site, nor use one to crush an unsuspecting mosquito. And, one cannot hurl a Web site against a wall in outrage, horror or ennui. Many chefs I know could relive their culinary triumphs by licking the food-splattered pages of their favorite cookbooks. Try doing that with a flat-screen monitor.

    All of these strike me as utterly reasonable fates for a book, even though (and perhaps because) they would horrify a biblioprude and befuddle a Web monkey.

    The most rococo act of book abuse is something I have performed only once — and it is a great deal more difficult than countless movies would have one believe. To excavate a hiding place for valuables within the pages of a thick book takes a sharp scalpel, a strong arm and a surprising amount of patience. I had hoped to cut a hole with the exact outline of the object to be hidden — not, sadly, a revolver, but something equally asymmetrical. However, slicing page after page with uniform precision proved beyond me, and all I could manage to gouge was a rather forlorn rectangle. (There are some who would tempt fate by stashing their baubles within "Great Expectations" or "Treasure Island." I played safe with "Pride and Prejudice," since I had never gotten much further than its eminently quotable first line.)

    I also enthusiastically turn down the pages of books as I read them — so much so that I have developed a personal dog-earing code: folding a top corner marks a temporary page position, while folding a bottom corner marks a page that might be worth revisiting. In both cases, the tip of the fold points toward the relevant passage. Of course, this could be achieved with a ribbon or a bookmark; but so many books are bereft of ribbons, and I have always thought there is something ever so slightly shifty about those who always have a bookmark on hand.

    My favorite act of abuse is writing in books — and, in this at least, I follow in illustrious footsteps. Mathematics would be considerably poorer were it not for the marginalia of Pierre de Fermat, who in 1637 jotted in his copy of the “Arithmetica” of Diophantus, "I have a truly marvelous proof of this proposition that this margin is too narrow to contain." This casual act of vandalism kept mathematicians out of trouble for 358 years. (Andrew Wiles finally proved Fermat's Last Theorem in 1995.)

    Libraries have an ambivalent attitude to marginalia. On the one hand, they quite properly object to people defacing their property. Cambridge University Library has a chamber of horrors displaying "marginalia and other crimes," including damage done by "animals, small children and birds," not to mention the far from innocuous Post-it note. On the other hand, libraries cannot suppress a flush of pride on acquiring an ancient text "annotated" by someone famous. Like graffiti, marginalia acquire respectability through age (and, sometimes, wit).

    While I take great delight in marking significant passages, jotting down notes and even doodling in my books, I do draw the line at highlighter pens. One of my schoolmates used to insist on marking the passages he needed to review with a fluorescent pink highlighter. It was gently suggested that, since swaths of his textbooks were smothered in pink, it might be easier to highlight the areas he didn't need to remember. He should have taken this advice, since the pink glop reacted badly with one particularly porous textbook, dissolving all of the type it touched and leaving legible only the irrelevant passages.

    I am not unaware that the abuse of books has a dark and dishonorable past. Books have been banned and burned and writers tortured and imprisoned since the earliest days of publishing. While one thinks of such historical nadirs as Savonarola’s "bonfire of the vanities" and the Nazi pyres of "un-German" and "degenerate" books, the American Library Association warns that we still live in an era of book burning. Perhaps inevitably, J. K. Rowling's boy wizard is the target of much modern immolation. One group in Lewiston, Maine, when denied permission for a pyre by the local fire department, held a "book cutting" of "Harry Potter and the Sorcerer's Stone" instead.

    To destroy a book because of its content or the identity of its author is a despicable strangulation of thought. But such acts are utterly distinct from the personal abuse of a book — and there is no "slippery slope" between the two. The businessman who tears off and discards the chunk of John Grisham he has already read before boarding a plane may lack finesse, but he is not a Nazi. Indeed, the publishing industry thinks nothing of pulping millions of unsold (or libelous) books each year. And there was no outcry in 2003 when 2.5 million romance novels from the publisher Mills & Boon were buried to form the noise-reducing foundation of a motorway extension in Manchester, England.

    It is notable that those who abuse their own books through manhandling or marginalia are often those who love books best. And surely the dystopia of "Fahrenheit 451" is more likely avoided through the loving abuse of books than through their sterile reverence. Not that I expect the chambermaid to agree.

....................

My favorite book-related quotation is from Ambrose Bierce, who remarked, "Never loan books to anyone. The only books I have in my library are those I've borrowed from someone else."

w00t!

Wednesday, December 17, 2025

James Altucher will sort you out























I've long been a fan of this sui generis man, ever since I happened on his weekly column in the Financial Times decades ago.

He shared how he decides to say "Yes" or "No" to opportunities:

...................................................

"Two out of these three have to trigger for me to say "Yes":

1. KNOWLEDGE: Will I learn something?

2. FUN: Is it fun?

3. MONEY: Is it financially worthwhile?

He says "No" a lot more than "Yes."

...................................................

Bonus Altucher which I have used to great effect ever since I happened on it:

"The best way to decline a request is to simply say these four words: 'I can't do it.'"

All Hearts Are Not Created Equal























Who knew?

Take a deep dive here.

Depths of Wikipedia


















College student Annie Rauwerda launched Depths of Wikipedia in 2020.

It's an Instagram account of unusual and unexpected Wikipedia entries.

Talk about going down the rabbit hole....

Fair warning: there goes the day!

Wait a sec — what's that song I'm hearing?

Tuesday, December 16, 2025

'Kolymsky Heights'























I'm about a third of the way through this 1995 spy thriller by Lionel Davidson, having first encountered it about 10-15 years ago.

As is par for the course these days, I can't recall a single thing about the book other than that I thought it was great.

This time around I happened on a review of it by Philip Pullman (in red up top): "The best thriller I've ever read."

That stopped me cold.

I read a few more reviews and then bought it for a second reading.

No way could I enjoy a book more: this one's got everything.

Rather than my usual weeks, this time it'll take just days to finish it, but oh, what fun days!

'Ludwig Aria' — Diego Marcon

More here and here.

Life Hack: How to decide quickly whether or not to purchase a book


"Apply the 'McLuhan test,' which is to read page 69 and, if you like it, buy the book." — Susan Elderkin writing in the Financial Times.

Monday, December 15, 2025

Facing Silence — E.M. Cioran













Once you have come to set great store by silence, you have hit upon a fundamental expression of life in the margins. The reverence for silence of great solitaries and founders of religions has far deeper roots than we think. Men's presence must have been unendurable and their complex problems disgusting for one not to care about anything except silence. Chronic fatigue predisposes to a love of silence, for in it words lose their meaning and strike the ear with the hollow sonority of mechanical hammers; concepts weaken, expressions lose their force, the word grows barren as the wilderness.

The ebb and flow of the outside is like a distant monotonous murmur unable to stir interest or curiosity. Then you will think it useless to express an opinion, take a stand, to make an impression; the noises you have renounced increase the anxiety of your soul. After having struggled madly to solve all problems, after having suffered on the heights of despair, in the supreme hour of revelation you will find that the only answer, the only reality, is silence.