- Introduction
- Coarse graining Alice and Dinah
- Coarse graining part I - Clustering algorithms
- Coarse graining part II - Entropy

- Markov Chains
- Mathematics of coarse grained Markov chains
- Mathematics of Coarse grained Markov Chains: The General Case
- A puzzle: origin of the slippy counter
- Where we are so far

- Cellular Automata: Introduction
- Israeli and Goldenfeld; projection and commuting diagrams
- Networks of Renormalization

- Fixing a projection: From CAs to Ising
- Introduction to the Ising Model
- Coarse-graining the Lattice
- Inducing Quartets & Commutation Failure
- Finding Fixed Points
- Ising Model Simulations

- Poking the Creature: An Introduction to Group Theory
- Irreversible Computations, Forgetful Computers and the Krohn-Rhodes Theorem

- From Quantum Electrodynamics to Plasma Physics
- The Thermal Physics of Plasma
- How does a particle move the plasma?
- Charge Renormalization and Feedback

- Conclusion: Keeping the things that matter

7.1 Conclusion: Keeping the things that matter » Quiz Solution
1. What was unusual about the Cellular Automaton renormalization example?
A. we were able to find an exact coarse-graining, where the diagram commuted.
B. the CAs were non-renormalizable.
C. we solved simultaneously for the coarse-graining operation and the model that described the emergent dynamics at that coarse-graining.
D. in the middle of the video, Stephen Wolfram walked across the screen wearing a gorilla suit.
Answer: (C). In all the other cases, we specified ahead of time the coarse-graining we wanted to do, and then tried to figure out what happened to the model. In the CA case, we put some restrictions on the coarse-graining (eliminating the trivial ones), but otherwise left it free. (B) is interesting: in some cases, Israeli and Goldenfeld could not find a g and P pair that worked at a particular scale, which does imply that some rules are non-renormalizable at that scale. But many are renormalizable, and it turned out that for every model a working pair could be found at sufficiently large coarse-graining scales. Meanwhile, (A) is something we've seen a couple of times -- in the Markov Chain renormalization, as well as in the Krohn-Rhodes theorem example (but not, for example, in the Ising model case).
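The commuting-diagram condition is easy to check numerically: evolve then coarse-grain, coarse-grain then evolve, and compare. Below is a minimal sketch for elementary CAs on a ring, testing whether P(f^t(x)) = g(P(x)) for a candidate pair. The OR-over-pairs projection and all the names are illustrative assumptions, not the course's code, and a real search (as in Israeli and Goldenfeld) ranges over (g, P) pairs and often takes several fine time steps per coarse step.

```python
import numpy as np

def ca_step(state, rule):
    """One synchronous update of an elementary CA (rule 0-255) on a ring."""
    left, right = np.roll(state, 1), np.roll(state, -1)
    table = np.array([(rule >> i) & 1 for i in range(8)])
    return table[4 * left + 2 * state + right]

def project(state):
    """An illustrative block projection P: OR over non-overlapping pairs."""
    return np.maximum(state[0::2], state[1::2])

def commutes(fine_rule, coarse_rule, state, t=1):
    """Does the diagram commute on this state: P(f^t(x)) == g(P(x))?"""
    fine = np.asarray(state)
    for _ in range(t):
        fine = ca_step(fine, fine_rule)          # evolve, then coarse-grain
    coarse = ca_step(project(np.asarray(state)), coarse_rule)  # and vice versa
    return np.array_equal(project(fine), coarse)
```

Rule 204 (the identity CA) trivially commutes with itself under any projection; a nontrivial rule like 110, naively paired with itself under this projection, fails on some initial states -- which is exactly why the pair has to be solved for.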
2. What does rate-distortion theory do for you?
A. it tells you how to trade off the cost of missing a fine-grained feature when you coarse-grain your data (on the one hand) against the benefit of the lower cost of information gathering that the coarse-graining allows (on the other), given a conversion factor between the two costs.
B. it specifies a unique coarse-graining given a particular utility function.
C. it eliminates the need for a coarse-graining operation, by replacing it with a utility function.
D. it measures the extent to which a coarse-graining produces an overly complex model.
Answer: (A). The closest answer to the correct one is (B). However, while it's true that rate-distortion theory requires you to specify a utility function (a "distortion function"), that's not enough to specify a coarse-graining -- you also have to specify the tradeoff parameter beta, which tells you how expensive it is to gather information relative to how expensive it is to make mistakes.
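The role of beta can be made concrete with the standard Blahut-Arimoto iteration, which minimizes the rate-distortion Lagrangian I(X; Xhat) + beta * E[d(X, Xhat)]. This is a textbook algorithm sketched under my own naming conventions, not code from the course; the binary source with Hamming distortion below is just an illustrative test case.

```python
import numpy as np

def blahut_arimoto(p_x, d, beta, n_iter=200):
    """Minimize I(X; Xhat) + beta * E[d] over the channel q(xhat | x).

    p_x  -- source distribution over n symbols
    d    -- n-by-m matrix; d[i, j] is the distortion of coding i as j
    beta -- conversion factor between information cost and error cost
    """
    n, m = d.shape
    q_xhat = np.full(m, 1.0 / m)              # marginal over codewords
    for _ in range(n_iter):
        # optimal channel given the current codeword marginal
        q = q_xhat * np.exp(-beta * d)
        q /= q.sum(axis=1, keepdims=True)
        # codeword marginal induced by that channel
        q_xhat = p_x @ q
    rate = np.sum(p_x[:, None] * q * np.log2(q / q_xhat))   # I(X; Xhat), bits
    distortion = np.sum(p_x[:, None] * q * d)               # E[d(X, Xhat)]
    return rate, distortion
```

For a uniform binary source with Hamming distortion, a large beta (mistakes expensive) pushes the rate toward 1 bit per symbol and the distortion toward 0, while a small beta (information expensive) collapses the rate toward 0 at distortion 1/2 -- the same coarse-graining knob the answer describes.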