
LIFE 3.0 - Max Tegmark Notes

Updated: Jun 24, 2022

*This book is really interesting and well suited for a beginner (to physics non-fiction). I have not written what follows myself; these are notes from the book. This blog does not contain any "spoilers" because it's non-fiction. You can absolutely read the blog and then decide if you want to read the book or not. If you read the book, you can refer back here if you forget something. I have jotted this down because these are really interesting facts that I like to revise. If you are not a reader but still want to know what an interesting book like this contains, then you must read on. Do share your thoughts in the contact or comment section.*



The basics start as follows:




When a bacterium makes a copy of its DNA, no new atoms are created; rather, a new set of atoms is arranged in the same pattern as the original, thereby copying the information. In other words, we can think of life as a self-replicating information-processing system whose information (software) determines both its behavior and the blueprints for its hardware.

“Life 1.0”: life where both the hardware and software are evolved rather than designed.

“Life 2.0”: life whose hardware is evolved, but whose software is largely designed. By your software, the author means all the algorithms and knowledge that you use to process the information from your senses and decide what to do.

Your synapses store all your knowledge and skills as roughly 100 terabytes’ worth of information, while your DNA stores merely about a gigabyte, barely enough to store a single movie download. So it’s physically impossible for an infant to be born speaking perfect English and ready to ace her college entrance exams: there’s no way the information could have been preloaded into her brain, since the main information module she got from her parents (her DNA) lacks sufficient information-storage capacity.
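Those two storage figures are easy to sanity-check with back-of-the-envelope arithmetic (my own, not the book's; I'm assuming roughly 3 billion base pairs at 2 bits each, and, very crudely, about a byte per synapse):

```python
# Back-of-the-envelope check of the book's storage figures.
BASE_PAIRS = 3.2e9        # approximate size of the human genome
BITS_PER_BASE = 2         # each base is one of A, C, G, T

dna_bytes = BASE_PAIRS * BITS_PER_BASE / 8
print(f"DNA capacity: ~{dna_bytes / 1e9:.1f} GB")  # ~0.8 GB: about a gigabyte

SYNAPSES = 1e14           # ~100 trillion synapses in an adult brain
BYTES_PER_SYNAPSE = 1     # crude assumption for illustration only
brain_bytes = SYNAPSES * BYTES_PER_SYNAPSE
print(f"Synaptic capacity: ~{brain_bytes / 1e12:.0f} TB")  # ~100 TB, matching the book
```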

The AI talk starts here

One popular myth is that we know we'll get superhuman AGI (artificial general intelligence) this century. In fact, history is full of technological over-hyping. Physicists know that a brain consists of quarks and electrons arranged to act as a powerful computer, and that there's no law of physics preventing us from building even more intelligent quark blobs. Information can take on a life of its own, independent of its physical substrate!

A computation is a transformation of one memory state into another. In short, computation is a pattern in the spacetime arrangement of particles, and it’s not the particles but the pattern that really matters! Matter doesn’t matter.

The hardware is the matter and the software is the pattern. This substrate independence of computation implies that AI is possible: intelligence doesn’t require flesh, blood or carbon atoms.
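Later in the book, NAND gates and neurons are called universal "computational atoms." As a toy illustration of substrate independence (my sketch, not the book's), any logic function can be built from NAND alone; what matters is the pattern of connections, not what physically implements the gate:

```python
# Substrate independence in miniature: every gate below is just a pattern
# of NAND calls; the NAND itself could be transistors, relays, or neurons.
def nand(a: bool, b: bool) -> bool:
    return not (a and b)

def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))

def xor(a, b):
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

# Sanity check: XOR truth table, built entirely from one "computational atom".
for a in (False, True):
    for b in (False, True):
        print(int(a), int(b), int(xor(a, b)))
```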

Quantum computing pioneer David Deutsch controversially argues that “quantum computers share information with huge numbers of versions of themselves throughout the multiverse,” and can get answers faster here in our Universe by in a sense getting help from these other versions.

Deep-learning neural networks (they're called "deep" if they contain many layers) are much more efficient than shallow ones for many functions of interest. David Rolnick showed that the simple task of multiplying n numbers requires a whopping 2^n neurons for a network with only one layer, but takes only about 4n neurons in a deep network. NAND gates and neurons are two important examples of such universal "computational atoms."

This is what made DeepMind's Atari result remarkable: the team had created a blank-slate AI that knew nothing about the game it was playing, or about any other games, or even about concepts such as games, paddles, bricks or balls. All their AI knew was that a long list of numbers got fed into it at regular intervals: the current score and a long list of numbers which we (but not the AI) would recognize as specifications of how different parts of the screen were colored. The AI was simply told to maximize the score by outputting, at regular intervals, numbers which we (but not the AI) would recognize as codes for which keys to press. DeepMind soon published their method and shared their code, explaining that it used a very simple yet powerful idea called deep reinforcement learning.

Basic reinforcement learning is a classic machine learning technique inspired by behaviorist psychology, where getting a positive reward increases your tendency to do something again, and vice versa. This matters for games like Go: there are vastly more possible Go positions than there are atoms in our Universe, which means that trying to analyze all interesting sequences of future moves rapidly gets hopeless.
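To make "a positive reward increases your tendency to do something again" concrete, here is a minimal tabular Q-learning sketch (my illustration; the book describes the idea, not this code, and the toy corridor environment and parameter values are made up for the example):

```python
import random

# Minimal tabular Q-learning on a made-up 5-cell corridor:
# start at cell 0, reward of +1 for reaching cell 4, 0 otherwise.
N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)                       # step left or right
alpha, gamma, epsilon = 0.5, 0.9, 0.1    # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(500):
    s = 0
    while s != GOAL:
        # Explore occasionally, otherwise pick the highest-valued action.
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda a: Q[(s, a)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == GOAL else 0.0
        # Core update: reward raises Q, making that action likelier next time.
        best_next = max(Q[(s_next, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s_next

print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)})
# After training, every cell's preferred action is +1 (toward the goal).
```

DeepMind's "deep" version replaced the lookup table with a deep neural network that reads the raw screen numbers, hence deep reinforcement learning.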

_____________________________________________________________________

Some "history" of physics

Quantum mechanics was developed by Heisenberg and others in the 1920s to rectify the situation; it is the general theoretical framework for describing dynamics. For phenomena involving both large velocities and tiny particles there is a synthesis known as quantum field theory, which is still undergoing development. The problem of quantum gravity is simply the fact that the current theories are not capable of describing the quantum behavior of the gravitational field. Einstein's discovery is that spacetime and the gravitational field are the same physical entity.

Spacetime is a manifestation of a physical field. All the fields we know exhibit quantum properties at some scale, therefore we believe space and time to have quantum properties as well. Bronstein's argument is not just the beginning, it is also the core of quantum gravity. Say you want to measure some field value at a location x, marking out a region of size L: Heisenberg uncertainty forces your probe to carry momentum of at least p ~ ħ/L.

This is a well-known consequence of Heisenberg uncertainty: sharp localization requires large momentum, which is the reason why at CERN high-momentum particles are used to investigate small scales. In turn, large momentum implies large energy E; in the relativistic limit, where rest mass is negligible, E ~ cp. Sharp localization requires large energy. In GR, any form of energy E acts as a gravitational mass M ~ E/c² and distorts spacetime around itself. The distortion increases as the energy is concentrated, to the point that a black hole forms when a mass M is concentrated in a sphere of radius R ~ GM/c², where G is Newton's constant. If we take L arbitrarily small, to get a sharper localization, the concentrated energy will grow to the point where R becomes larger than L. But in this case the region of size L that we wanted to mark will be hidden behind a black hole horizon, and we lose localization. Therefore we can decrease L only up to a minimum value, which clearly is reached when the horizon radius reaches L, that is, when R = L.
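Chaining those order-of-magnitude relations together (a back-of-the-envelope sketch of the argument; the note elides the book's exact derivation, but the steps follow from the relations just listed):

```latex
\[
p \sim \frac{\hbar}{L}, \qquad
E \sim cp \sim \frac{\hbar c}{L}, \qquad
M \sim \frac{E}{c^2} \sim \frac{\hbar}{cL}, \qquad
R \sim \frac{GM}{c^2} \sim \frac{G\hbar}{c^3 L}.
\]
Setting the horizon radius equal to the region we want to resolve, $R = L$:
\[
L^2 \sim \frac{\hbar G}{c^3}
\quad\Longrightarrow\quad
L_{\mathrm{Planck}} = \sqrt{\frac{\hbar G}{c^3}} \approx 1.6 \times 10^{-35}\,\mathrm{m}.
\]
```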

*Sorry if these terms bother you, because they (and many other technical things in the book) bothered me too. Just give it another slow read and you are good to go.*

Combining the relations above, we find that the minimal size at which we can localize a quantum particle, without having it hidden behind its own horizon, is the Planck length, L_Planck = √(ħG/c³) ≈ 1.6 × 10⁻³⁵ meters. Well above this length scale, we can treat spacetime as a smooth space; below it, it makes no sense to talk about distance. What happens at this scale is that the quantum fluctuations of the gravitational field, namely the metric, become large, and spacetime can no longer be viewed as a smooth manifold: anything smaller than L_Planck is "hidden inside its own mini-black hole."

Back to AI: in the same way, I (author) suspect that there are simpler ways to build human-level thinking machines than the solution evolution came up with, and that even if we one day manage to replicate or upload brains, we'll end up discovering one of those simpler solutions first. Such a machine will probably draw more than the twelve watts of power that your brain uses, but its engineers won't be as obsessed with energy efficiency as evolution was, and soon enough they'll be able to use their intelligent machines to design more energy-efficient ones. The hardware and electricity costs of running the AI are crucial as well.

The first malware to draw significant media attention was the so-called Morris worm, unleashed on November 2, 1988, which exploited bugs in the UNIX operating system. On May 5, 2000, as if to celebrate the author's birthday, people got emails with the subject line "ILOVEYOU" from acquaintances and colleagues, and those Microsoft Windows users who clicked on the attachment "LOVE-LETTER-FOR-YOU.txt.vbs" unwittingly launched a script that damaged their computers and resent the email to everyone in their address book. Created by two young programmers in the Philippines, this worm infected about 10% of the internet, just as the Morris worm had done, but because the internet was a lot bigger by then, it became one of the greatest infections of all time, afflicting over 50 million computers and causing over $5 billion in damages.

Sunway TaihuLight, the world's fastest supercomputer in 2016, has raw computational power that arguably exceeds that of the human brain. After strong machine AI is built, it's not obvious that cyborgs or uploads will ever be made; if the Neanderthals had had another 100,000 years to evolve and get smarter, things might have turned out great for them, but Homo sapiens never gave them that much time. Freeman Dyson's idea was to rearrange Jupiter into a biosphere in the form of a spherical shell surrounding the Sun, where our descendants could flourish, enjoying 100 billion times more biomass and a trillion times more energy than humanity uses today.

He argued that this was the natural next step: "One should expect that, within a few thousand years of its entering the stage of industrial development, any intelligent species should be found occupying an artificial biosphere which completely surrounds its parent star." If you lived on the inside of a Dyson sphere, there would be no nights. The silver lining is that a Dyson sphere the size of Earth's current orbit would give us about 500 million times more surface area to live on. Simply pouring a teaspoonful of anti-water into regular water would unleash the energy equivalent of 200,000 tons of TNT, the yield of a typical hydrogen bomb, enough to power the world's entire energy needs for about seven minutes. Today's nuclear reactors do dramatically better by splitting uranium atoms through fission, but still fail to extract more than 0.08% of their energy. And even if we enclose the Sun in a perfect Dyson sphere, we'll never convert more than about 0.08% of the Sun's mass to energy we can use, because once the Sun has consumed about a tenth of its hydrogen fuel, it will end its lifetime as a normal star, expand into a red giant, and begin to die.
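Two of those numbers are easy to sanity-check with E = mc² and a bit of geometry (my arithmetic, assuming a teaspoon of anti-water is about 5 grams and annihilates with an equal mass of water):

```python
# Check 1: a teaspoon (~5 g) of anti-water annihilating with 5 g of water.
c = 3.0e8                 # speed of light, m/s
m = 0.005 + 0.005         # kg of matter + antimatter destroyed
E = m * c**2              # joules released
TNT_TON = 4.184e9         # joules per ton of TNT
print(f"~{E / TNT_TON:,.0f} tons of TNT")  # ~215,000 tons, matching the book

# Check 2: surface area of a Dyson sphere at Earth's orbit vs. Earth itself.
AU = 1.496e11             # Earth-Sun distance, m
R_EARTH = 6.371e6         # Earth's radius, m
print(f"~{(AU / R_EARTH)**2:,.0f}x Earth's surface area")  # ~550 million
```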

Hawking famously calculated that quantum gravity effects make a black hole act like a hot object (the smaller, the hotter) that gives off heat radiation now known as Hawking radiation. This means that the black hole gradually loses energy and evaporates away. In other words, whatever matter you dump into the black hole will eventually come back out again as heat radiation, so by the time the black hole has completely evaporated, you've converted your matter to radiation with nearly 100% efficiency. A problem with using black hole evaporation as a power source is that, unless the black hole is much smaller than an atom in size, it's an excruciatingly slow process that takes longer than the present age of our Universe and radiates less energy than a candle. Roger Penrose discovered that if you launch an object at a clever angle and make it split into two pieces, then you can arrange for only one piece to get eaten while the other escapes the black hole with more energy than you started with.

Then there is the sphaleron process, which can destroy quarks and turn them into leptons: electrons, their heavier cousins the muon and tau particles, neutrinos, or their antiparticles. The standard model of particle physics predicts that nine quarks with appropriate flavor and spin can come together and transform into three leptons through an intermediate state called a sphaleron. Because the input weighs more than the output, the mass difference gets converted into energy according to Einstein's E = mc² formula.
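The "slower than the age of the Universe" claim can be checked against the standard evaporation-time estimate for a Schwarzschild black hole, t ≈ 5120πG²M³/(ħc⁴) (my check with textbook constants, not a calculation quoted in the book):

```python
import math

# Evaporation time of a solar-mass black hole via Hawking radiation.
G, hbar, c = 6.674e-11, 1.055e-34, 3.0e8
M_SUN = 1.989e30          # kg
YEAR = 3.156e7            # seconds

t = 5120 * math.pi * G**2 * M_SUN**3 / (hbar * c**4)
print(f"~{t / YEAR:.1e} years")  # ~1e67 years vs. the Universe's ~1.4e10
```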

THE UNIVERSE

During the 13.8 billion years since then, a great segregation took place, where atoms became concentrated into galaxies, stars and planets, while most photons stayed in intergalactic space, forming the cosmic microwave background radiation that has been used to make baby pictures of our Universe. Wormholes require the existence of a strange hypothetical kind of matter with negative density, whose existence may hinge on poorly understood quantum gravity effects.

Our Universe has now been expanding for about 14 billion years, and the author lists five ways that could end: the Big Chill, the Big Crunch, the Big Rip, the Big Snap and Death Bubbles. The Big Chill is when our Universe keeps expanding forever, diluting our cosmos into a cold, dark and ultimately dead place. The Big Crunch is where the cosmic expansion is eventually reversed and everything comes crashing back together in a cataclysmic collapse akin to a backward Big Bang. The Big Rip is like the Big Chill for the impatient, where our galaxies, planets and even atoms get torn apart in a grand finale a finite time from now.

Whereas you or a future Earth-sized supercomputer can have many thoughts per second, a galaxy-sized mind could have only one thought every hundred thousand years, and a cosmic mind a billion light-years in size would only have time for about ten thoughts in total before dark energy fragmented it into disconnected parts. On the other hand, these few precious thoughts and accompanying experiences might be quite deep!

The Indian physicist Subrahmanyan Chandrasekhar famously proved that if you keep adding mass to a white dwarf until it surpasses the Chandrasekhar limit, about 1.4 times the mass of our Sun, it will undergo a cataclysmic thermonuclear detonation known as a type Ia supernova.

On aliens: the distance to our nearest neighbor civilization is in the ballpark of 1000…000 meters, where the total number of zeroes could reasonably be 21, 22, 23, …, 100, 101, 102 or more, but probably not much smaller than 21, since we haven't yet seen compelling evidence of aliens. For our nearest neighbor civilization to be within our Universe, whose radius is about 10^26 meters, the number of zeroes can't exceed 26, and the probability of the number of zeroes falling in the narrow range between 22 and 26 is rather small. This is why I (author) think we're alone in our Universe.
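To see why that probability comes out "rather small", here is a toy version of the argument (my illustration, assuming, purely for the sake of the example, a uniform prior over the exponent range the note mentions):

```python
# Toy version of the "we're probably alone" argument: if the exponent
# (number of zeroes in the distance) is equally likely to be anywhere in
# 21..102, how often does it land in the narrow window 22..26 that would
# put our nearest neighbor civilization inside our observable Universe?
exponents = range(21, 103)          # assumed plausible range from the note
inside = [n for n in exponents if 22 <= n <= 26]
print(f"P(neighbor within our Universe) ~ {len(inside) / len(exponents):.0%}")  # ~6%
```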





