Last Tuesday, the Nobel Prize in Physics was jointly awarded to Roger Penrose, Andrea Ghez, and Reinhard Genzel. Penrose was awarded half of the prize for his foundational theoretical work showing that black hole formation is a robust prediction of general relativity. The other half of the prize was shared between Ghez and Genzel for their decades of measurements that revealed the supermassive black hole at the center of the Milky Way, Sagittarius A*. This year’s Nobel Prize in Physics is the first to be awarded specifically for black hole research. As we’ve seen in the past few weeks, black holes have massive implications throughout theoretical physics. Black holes first appeared in 1916 as a mathematical consequence of Einstein’s brand-new theory of general relativity, but even Einstein was skeptical that they could actually exist in the physical realm. From there, black holes have had a long journey from mathematical anomaly to accepted theory to experimentally observed phenomenon.
The history of black holes actually stretches farther back than Einstein. In the 1700s, the astronomer John Michell and the polymath Pierre-Simon Laplace each independently predicted the existence of stars so massive that not even light could escape their gravity. Their calculations were based entirely on Newton’s equations of gravity. The Newtonian view of gravity is still used today in some cases, particularly in calculating the gravitational pull exerted on objects on Earth.
Newton’s Theory of Universal Gravitation posited that any two objects exert a gravitational force on each other proportional to the product of their masses and inversely proportional to the square of the distance between them. For objects falling near Earth’s surface, the object’s own mass doesn’t matter: a heavier object feels a stronger gravitational pull, but it also takes more force to accelerate, and the two effects cancel exactly. Every falling object therefore experiences the same acceleration, meaning an elephant and a peanut thrown off a cliff simultaneously both hit the bottom at the same time (ignoring air resistance). Separation still plays a part in gravity on Earth, though. Technically, your weight (your mass multiplied by the local acceleration due to gravity, roughly 9.8 m/s² at Earth’s surface) is ever so slightly smaller at the top of a mountain than at sea level.
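If you want to see just how small that mountain-top difference really is, here’s a quick Python sketch. The values for Earth’s mass and radius are rounded textbook numbers, so treat the output as approximate:

```python
# A quick back-of-the-envelope use of Newton's law of universal gravitation.
# Earth's mass and radius are rounded textbook values, so the numbers below
# are approximate.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # mass of Earth, kg
R_EARTH = 6.371e6    # mean radius of Earth, m
EVEREST = 8_849      # height of Mount Everest above sea level, m

def little_g(distance_from_center):
    """Gravitational acceleration g = G * M / r^2 at a distance r from Earth's center."""
    return G * M_EARTH / distance_from_center**2

g_sea = little_g(R_EARTH)
g_peak = little_g(R_EARTH + EVEREST)

# Notice that the falling object's own mass never appears: an elephant and a
# peanut get exactly the same acceleration. Only distance from Earth's center matters.
print(f"g at sea level:     {g_sea:.4f} m/s^2")
print(f"g on Mount Everest: {g_peak:.4f} m/s^2")
print(f"a 70 kg climber is about {70 * (g_sea - g_peak):.1f} N lighter on the summit")
```

That works out to a difference of only about a quarter of a percent, which is why we can usually treat gravity as a constant 9.8 m/s² anywhere on Earth.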
Newtonian physics conceptualizes gravity as a force that any massive object exerts on any other massive object. But by the early 1900s, it had become clear that this wasn’t the complete picture. Newtonian gravity fails in some instances, particularly where gravity is extremely strong. The first chink in the armor of Universal Gravitation came from observations of Mercury’s orbit around the sun. Like all planets, Mercury orbits the sun elliptically (an oval-shaped orbit). But unlike the other planets, whose paths follow Newtonian gravity almost exactly, Mercury’s ellipse precesses: the orientation of its orbit shifts measurably with every trip around the sun. Universal Gravitation fails to predict the full extent of Mercury’s precession, and for a long time scientists couldn’t understand why a theory that so accurately predicts gravitational motion in every other case would fail in this one.
The solution came in 1915 when Einstein presented a new gravitational theory, his theory of general relativity. General relativity posits that, instead of being a force exerted by one massive body on another, gravity actually arises from the curving of spacetime around massive objects. The spacetime depression caused by a star or planet is significant enough to cause nearby objects to fall into that local well of spacetime (unless the object is traveling fast enough to orbit the well instead). Like marbles circling a drain, moons orbit planets and planets orbit stars because the fabric of spacetime is curved by those heavier objects. In most cases, the predictions of Newtonian gravity and general relativity are practically indistinguishable. It’s only when gravity gets particularly strong, either because the source of the gravity is supermassive or because (as in Mercury’s case) an object sits very close to the source, that the two theories diverge. As you might expect, the extreme gravity around black holes would be the ideal place to test the boundaries of general relativity.
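To get a feel for how small the disagreement is, here’s a rough Python sketch of the extra precession that general relativity predicts for Mercury, using the standard textbook formula and rounded orbital values (so treat the result as approximate):

```python
import math

# Extra perihelion precession of Mercury predicted by general relativity,
# beyond what Newtonian gravity accounts for. Orbital values are rounded.
GM_SUN = 1.327e20    # gravitational parameter of the sun, m^3/s^2
C = 2.998e8          # speed of light, m/s
A = 5.79e10          # Mercury's semi-major axis, m
E = 0.2056           # Mercury's orbital eccentricity
PERIOD_DAYS = 87.97  # Mercury's orbital period, days

# GR's correction per orbit (in radians): 6*pi*G*M / (c^2 * a * (1 - e^2))
per_orbit = 6 * math.pi * GM_SUN / (C**2 * A * (1 - E**2))

orbits_per_century = 36_525 / PERIOD_DAYS
arcseconds = math.degrees(per_orbit * orbits_per_century) * 3600

print(f"extra precession: ~{arcseconds:.0f} arcseconds per century")
```

That comes out to roughly 43 arcseconds per century, a minuscule shift, but it matches the stubborn discrepancy astronomers had been measuring in Mercury’s orbit.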
Einstein and several other physicists in the early 20th century laid a rich theoretical foundation for describing what happens to spacetime around an object compressed into a single point of effectively infinite density, a singularity. As Michell and Laplace had predicted using Newtonian mechanics, such an object would trap even the fastest-moving thing in the universe: light. Whereas planets and stars bend spacetime and create wells, a singularity punctures spacetime completely and creates a trap. General relativity predicts a radius surrounding the singularity, known as the Schwarzschild radius (after Karl Schwarzschild, who worked out the relevant solution to Einstein’s equations in 1916), within which the curvature of spacetime is too steep for even light to escape. This boundary is what we now call the event horizon.
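The Schwarzschild radius has a surprisingly simple formula, r_s = 2GM/c². Here’s a quick Python sketch of how big it works out to be; the masses are rounded (Sagittarius A* is roughly four million solar masses), so the figures are approximate:

```python
# Schwarzschild radius r_s = 2*G*M / c^2: squeeze an object inside this radius
# and not even light can climb back out. Masses below are rounded values.
G = 6.674e-11                  # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8                    # speed of light, m/s
M_SUN = 1.989e30               # mass of the sun, kg
M_SGR_A_STAR = 4.0e6 * M_SUN   # Sagittarius A*, roughly 4 million solar masses

def schwarzschild_radius(mass_kg):
    return 2 * G * mass_kg / C**2

print(f"the sun:        ~{schwarzschild_radius(M_SUN) / 1e3:.0f} km")
print(f"Sagittarius A*: ~{schwarzschild_radius(M_SGR_A_STAR) / 1e9:.0f} million km")
```

To put that in perspective: the sun would have to be squeezed into a radius of about three kilometers to become a black hole, while Sagittarius A*’s event horizon has a radius of roughly twelve million kilometers.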
Even as general relativity offered more and more theoretical evidence for the existence of black holes, physicists still couldn’t figure out how a black hole could realistically form. It was at this point, in the 1960s and ’70s, that Roger Penrose introduced a new way of mapping the fates of objects inside and outside black holes: the Penrose diagram. A Penrose diagram takes the form of a diamond in which the vertical axis represents time and the horizontal axis represents space. The diagonal edges of the diamond represent the infinitely distant past and future, as traced out by rays of light. Light always travels across a Penrose diagram at a 45-degree angle, and anything slower than light follows a path steeper than 45 degrees (closer to vertical).
Any object placed on the diagram has a light cone, an hourglass-shaped region that represents all of its possible future locations in spacetime and all of the past events that could possibly have affected it. This light cone is bounded by light moving away from the object at a 45-degree angle in both directions. Anything outside this cone cannot possibly impact or be impacted by the object (because nothing can move faster than light). Using these diagrams, Penrose was able to demonstrate how the extreme curvature around a collapsing star distorts light cones. Past the event horizon, every future light cone ends at the singularity, meaning that once a black hole forms, all of the star’s remaining matter and light must inevitably collapse into it.
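If you’d like to see what that diamond actually looks like, the short Python sketch below (using matplotlib) draws a Penrose-style diagram of ordinary, flat spacetime with no black hole in it, along with the future light cone of a single event. The arctan “squashing” of the light-ray coordinates is the standard trick for fitting infinite spacetime onto a finite diagram; everything else is just plotting:

```python
import numpy as np
import matplotlib.pyplot as plt

def to_penrose(t, x):
    # Light-ray ("null") coordinates u = t - x and v = t + x are squashed with
    # arctan so the whole infinite spacetime fits inside a finite diamond.
    # Light rays keep u or v constant, so they stay straight 45-degree lines.
    u, v = np.arctan(t - x), np.arctan(t + x)
    return v - u, v + u            # (horizontal = space, vertical = time)

fig, ax = plt.subplots(figsize=(5, 5))

# A grid of constant-time (blue) and constant-position (green) lines,
# showing how the infinite plane gets folded into the diamond.
span = np.linspace(-60, 60, 800)
for t0 in (-20, -5, 0, 5, 20):
    ax.plot(*to_penrose(np.full_like(span, t0), span), color="steelblue", lw=0.7)
for x0 in (-20, -5, 0, 5, 20):
    ax.plot(*to_penrose(span, np.full_like(span, x0)), color="seagreen", lw=0.7)

# The future light cone of an event at t = 0, x = 5: two light rays heading
# left and right at 45 degrees. Everything the event can ever influence lies
# between them.
t = np.linspace(0, 60, 400)
ax.plot(*to_penrose(t, 5 + t), color="crimson", lw=2)
ax.plot(*to_penrose(t, 5 - t), color="crimson", lw=2)

ax.set_xlabel("compactified space")
ax.set_ylabel("compactified time")
ax.set_aspect("equal")
plt.show()
```

In the flat-spacetime picture above, every light cone eventually opens out toward the diamond’s upper edges (the infinitely distant future). Inside a black hole’s event horizon, the singularity takes the place of that upper edge, so every possible future ends there, which is a picture-book way of seeing the point Penrose made rigorous.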
While Penrose was able to theoretically demonstrate how black holes form, it remained difficult to experimentally prove that they exist. Just last year, the Event Horizon Telescope collaboration managed to produce an image of the supermassive black hole at the center of the M87 galaxy by collecting radio waves coming off the hot plasma surrounding it. But before that achievement, one of the only ways to detect a black hole was to watch how the stars around it move under its gravity. Two different teams, led by Andrea Ghez and Reinhard Genzel, tracked stars at the center of the Milky Way galaxy for almost 30 years, starting in the early 1990s. Watching those stars whip around a massive but invisible point provided robust evidence for the existence of a supermassive black hole, Sagittarius A*, at the center of our galaxy.
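In fact, with just one star’s orbit and Kepler’s third law, you can roughly “weigh” Sagittarius A* yourself. The numbers below for the star S2 (one of the stars both teams tracked) are rounded from published values, so the result is only a ballpark estimate:

```python
import math

# Rough "weighing" of Sagittarius A* from the orbit of the star S2, using
# Kepler's third law: M = 4 * pi^2 * a^3 / (G * T^2).
# S2's orbital parameters are rounded, so this is only a ballpark figure.
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
AU = 1.496e11       # meters in one astronomical unit
YEAR = 3.156e7      # seconds in one year
M_SUN = 1.989e30    # mass of the sun, kg

a = 1000 * AU       # S2's semi-major axis, roughly 1000 AU
T = 16 * YEAR       # S2's orbital period, roughly 16 years

mass = 4 * math.pi**2 * a**3 / (G * T**2)
print(f"Sagittarius A* is roughly {mass / M_SUN / 1e6:.0f} million solar masses")
```

All of that mass is packed into a region smaller than our solar system, which is why a supermassive black hole is by far the most convincing explanation.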
These observations also served to reinforce the predictions made by general relativity. As a star swings close to a black hole, general relativity predicts that the light coming off of it will be shifted toward the red end of the spectrum (because the light loses energy climbing out of the black hole’s gravitational well). In Newtonian gravity, this gravitational redshift simply doesn’t happen. The first observation of this red-shifting in a star orbiting a black hole came from the measurements made by Ghez’s and Genzel’s teams. Nearly 100 years after Einstein first presented the theory, these observations provided concrete evidence of general relativity in action.
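For a rough sense of how big that effect is, here’s a quick weak-field estimate of the gravitational redshift for light leaving a star at roughly S2’s closest approach to Sagittarius A*. The distances and masses are rounded, and the teams’ real analysis also had to account for the star’s enormous orbital speed, so treat this as an order-of-magnitude sketch:

```python
# Weak-field estimate of gravitational redshift: z ~ G*M / (r * c^2).
# Numbers are rounded approximations for S2's closest approach to Sagittarius A*.
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8              # speed of light, m/s
M_SUN = 1.989e30         # mass of the sun, kg
AU = 1.496e11            # meters in one astronomical unit

M_BH = 4.0e6 * M_SUN     # Sagittarius A*, roughly 4 million solar masses
r = 120 * AU             # S2's closest approach, roughly 120 AU

z = G * M_BH / (r * C**2)          # fractional stretch in the light's wavelength
print(f"gravitational redshift z ~ {z:.1e}")
print(f"equivalent velocity shift: ~{z * C / 1e3:.0f} km/s")
```

A wavelength stretch of a few parts in ten thousand is tiny, which is why it took decades of steadily improving instruments before the effect could finally be measured during S2’s close pass in 2018.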
Comment below or email me at contact@anyonecanscience.com to let me know what you think about this week’s blog post and tell me what sorts of topics you want me to cover in the future. And subscribe below for weekly science posts sent straight to your email!