
This year’s Nobel Prizes underscore the transformative impact of the artificial intelligence revolution currently underway, with notable connections to the achievements recognized in physics and chemistry.

The Nobel Prizes are awarded annually on December 10, marking the anniversary of Alfred Nobel’s death. Nobel, a Swedish industrialist who amassed a significant fortune primarily through the invention and development of explosives, later grew to regret the suffering and destruction his inventions had caused—or enabled. Toward the end of his life he resolved to dedicate his fortune to the benefit of humanity. Upon his death in 1896, it was revealed that he had left most of his fortune to a fund, the interest from which was to be used to award prizes for outstanding discoveries and developments in five categories: three scientific (Medicine or Physiology, Physics, and Chemistry) and two non-scientific (Literature and Peace).

Though Alfred Nobel had no direct descendants, his will was contested by distant relatives who received only minor bequests. After a protracted legal dispute, they were ultimately unsuccessful, and in 1901, five years after his death, the Nobel Prizes were awarded for the first time. They quickly gained recognition as the most important and prestigious scientific awards in their respective fields. In 1969, the tradition was expanded with the introduction of the Prize in Economic Sciences in Memory of Alfred Nobel by the Swedish central bank, which became part of the ceremony and associated events, although it is not funded through Nobel's original endowment.

Nobel wished to be remembered as someone who contributed to the benefit of humanity, not as a harbinger of death. Alfred Nobel's will | Photo: Prolineserver, Wikipedia, Public Domain

 

Over the years, the Nobel Prizes have faced significant criticism, some of it justified. A major point of contention is the exclusion of deserving scientists, particularly female scientists, who were overlooked for the prizes. Another criticism is the exclusion of organizations or institutions from most categories. Only the Nobel Peace Prize allows for organizational recipients, whereas the scientific prizes are restricted to individuals. This policy disregards the reality that many groundbreaking discoveries result from collaborations involving dozens, hundreds, or even thousands of researchers, as exemplified by large-scale international projects like CERN or LIGO.

Additionally, the prize committees have diverged from two stipulations in Nobel’s will: awarding the prize to a single individual per field and for work completed in the past year. Today, prizes are often shared by two or three laureates per field and recognize achievements that have stood the test of time, often decades, until their importance has been unequivocally proven.

The scope of scientific fields defined in Nobel’s will—Medicine or Physiology, Physics, and Chemistry—also presents challenges more than 120 years later. The absence of prizes for Mathematics or Computer Science is particularly notable, given the critical roles these disciplines now play—roles that Nobel likely could not have anticipated. Occasionally, the prize committees have stretched the definitions to accommodate contemporary advancements, as they did this year by awarding the Physics Prize to two pioneers of artificial intelligence, even though the connection of their work to traditional physics is rather tenuous. John Hopfield of Princeton University, USA, and Geoffrey Hinton of the University of Toronto, Canada, were recognized for their development of computational tools that simulate the activity of the nervous system, laying the foundation for modern artificial intelligence.

They laid the groundwork for modern artificial intelligence by developing systems that simulate the activity of neural networks in the brain. Geoffrey Hinton (right) and John Hopfield | Photos: Ramsey Cardy, bhadeshia123, Wikipedia

 

The Computer and the Brain

The first modern computers, developed in the 1940s, were partially based on an abstract model of the human brain, where neurons represent logical gates, and their interconnected networks enable computational operations and memory storage. In 1949, psychologist Donald O. Hebb proposed a theory that basic learning occurs by altering the strength of connections between neurons. According to Hebbian Theory, neuronal connections can be reshaped through experience. Specifically, when one neuron consistently activates another through electrical or chemical signals, the connection between them strengthens. Conversely, when neurons interact infrequently, their connections weaken.
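Hebb's principle ("neurons that fire together wire together") can be sketched in a few lines of code. This is a toy illustration rather than a biological model, and the learning-rate and decay constants are arbitrary choices:

```python
def hebbian_update(weight, pre, post, lr=0.1, decay=0.01):
    """Strengthen the connection when the pre- and post-synaptic
    neurons are active together; otherwise let it slowly decay.
    pre, post: activity levels (0 = silent, 1 = firing)."""
    return weight + lr * pre * post - decay * weight

# Repeated co-activation strengthens the connection...
w = 0.0
for _ in range(50):
    w = hebbian_update(w, pre=1, post=1)

# ...while prolonged inactivity lets it fade again.
for _ in range(50):
    w = hebbian_update(w, pre=0, post=0)
```

Run repeatedly with co-active neurons, the weight climbs toward a stable maximum; with silent neurons, it decays, mirroring the strengthening and weakening Hebb described.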

These ideas spurred the development of artificial neural networks designed to emulate the brain's functions and learning mechanisms. In artificial neural networks, nodes simulate neurons, and the connections between them are assigned weights that represent the strength of those connections. Adjustments to these weights in response to external stimuli enable the network to learn. Such networks are adept at performing complex tasks, such as facial recognition, tumor diagnosis, and differentiating between Persian and Angora cats—tasks that are difficult to define through explicit programming. Unlike traditional software, artificial neural networks learn from examples rather than relying on predefined rules.
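Learning from examples rather than rules can be demonstrated with the simplest such network, a single-layer perceptron. In this minimal sketch (learning rate and epoch count are arbitrary choices), the network is never told the rule for logical AND; it infers the weights from four labeled examples:

```python
def predict(weights, bias, inputs):
    """Fire (1) if the weighted sum of inputs crosses the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if total > 0 else 0

def train(samples, epochs=25, lr=0.1):
    """Perceptron rule: nudge the weights toward each mistake."""
    n = len(samples[0][0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        for inputs, label in samples:
            error = label - predict(weights, bias, inputs)
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Learn logical AND purely from labeled examples.
samples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = train(samples)
```

After training, `predict` reproduces all four examples correctly; the behavior lives entirely in the learned weights, not in any hand-written rule.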

Early attempts to develop neural networks were largely unsuccessful, causing interest in the field to wane. However, breakthroughs in the 1980s led to a resurgence in research, resulting in significant progress.

Each "neuron" connects to all the neurons in the layer above and below it. Schematic diagram of an artificial neural network | Illustration: THOM LEACH / SCIENCE PHOTO LIBRARY

John Hopfield, born in 1933, earned a Ph.D. in physics from Cornell University, worked at Bell Laboratories, and later became a researcher at Princeton University. In 1980, he was appointed professor of chemistry and biology at the California Institute of Technology (Caltech), where he sought to combine physics with biological systems and explore the development of computer-based neural networks. Inspired by magnetic systems in which adjacent components influence each other, Hopfield developed an artificial neural network in which all neurons are interconnected, unlike traditional networks where layers of neurons are connected sequentially. Each connection between neurons was assigned a weight, and the total energy of the system was calculated by summing the contributions of all the connections.

In this network, called a "Hopfield Network," an image can be stored by adjusting the weights of the connections between neurons so that the stored image corresponds to a minimum energy state. When a new image is then input, the values of the neurons are updated step by step, lowering the energy until the network settles into the nearest minimum. This iterative process allows the network to "remember" and reproduce the original image or images it was trained on, and to retrieve information, images, or text from partial or similar details, akin to associative memory. The Hopfield Network, developed in 1982, was one of the first significant successes in the neural networks field.
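A working Hopfield network fits in a few dozen lines. This sketch uses ±1 neuron states and the standard Hebbian outer-product rule to store a single tiny "image," then recovers it from a corrupted copy; the pattern itself is an arbitrary example:

```python
def train(patterns):
    """Set connection weights with the Hebbian outer-product rule;
    each stored pattern becomes a minimum of the network's energy."""
    n = len(patterns[0])
    weights = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    weights[i][j] += p[i] * p[j] / n
    return weights

def energy(weights, state):
    """Hopfield energy: a sum over all weighted connections."""
    n = len(state)
    return -0.5 * sum(weights[i][j] * state[i] * state[j]
                      for i in range(n) for j in range(n))

def recall(weights, state, sweeps=5):
    """Update neurons one at a time; each update can only lower the
    energy, so the state slides into the nearest stored minimum."""
    state = list(state)
    n = len(state)
    for _ in range(sweeps):
        for i in range(n):
            field = sum(weights[i][j] * state[j] for j in range(n))
            state[i] = 1 if field >= 0 else -1
    return state

# Store one 8-"pixel" pattern and recall it from a noisy copy.
pattern = [1, 1, -1, -1, 1, -1, 1, -1]
weights = train([pattern])
noisy = list(pattern)
noisy[0] = -noisy[0]            # corrupt one "pixel"
restored = recall(weights, noisy)
```

The corrupted input has higher energy than the stored pattern, and the update loop walks it back down to the memorized state, the associative recall described above.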

Geoffrey Hinton developed an enhancement of Hopfield's network called the "Boltzmann Machine." Born in London in 1947, Hinton studied psychology and earned a Ph.D. in artificial intelligence. However, he struggled to secure funding for neural network research in the UK, which led him to move to the United States and later to the University of Toronto in Canada. Hinton applied statistical physics, particularly the Boltzmann distribution, which gives the probability that a specific molecule in a large system—such as a trillion gas molecules—has a certain velocity, based on the system's volume, temperature, and pressure.

In 1986, Hinton introduced his artificial neural network, into which images could be fed, adjusting the connection strengths between the "neurons" accordingly. After sufficient iterations, the network reached a state where, even as individual connections strengthened or weakened, the overall properties of the machine, analogous to the bulk properties of a gas, remained at equilibrium. From this state, it could generate a new image, different from the ones it was trained on but in a similar style. The Boltzmann Machine is an early example of generative artificial intelligence. While inefficient and requiring long computation times, it laid the groundwork for modern image and text generation models.
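The sampling idea at the heart of the Boltzmann Machine can be shown in miniature. This sketch uses binary 0/1 units and hand-picked weights, and omits the training step (the slow part alluded to above); each unit switches on with a probability given by the sigmoid of its total input, the Boltzmann-style rule, so the machine produces samples rather than one fixed answer:

```python
import math
import random

def gibbs_sample(weights, biases, state, sweeps=20, rng=None):
    """Gibbs sampling over binary units: each unit turns on with
    probability sigmoid(total input it receives)."""
    rng = rng or random.Random(0)
    state = list(state)
    n = len(state)
    for _ in range(sweeps):
        for i in range(n):
            field = biases[i] + sum(weights[i][j] * state[j]
                                    for j in range(n) if j != i)
            p_on = 1.0 / (1.0 + math.exp(-field))
            state[i] = 1 if rng.random() < p_on else 0
    return state

# Two units joined by a strongly positive weight tend to agree:
# most samples come out as (0, 0) or (1, 1), but not all of them.
weights = [[0.0, 4.0],
           [4.0, 0.0]]
biases = [-2.0, -2.0]
sample = gibbs_sample(weights, biases, [0, 1])
```

Because the output is a draw from a probability distribution rather than a deterministic answer, repeated runs yield different but statistically similar states, which is the sense in which such machines "generate" new examples.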

Each neuron in the artificial neural network is connected to all other neurons, allowing the energy of all connections to be calculated. Schematic diagram of a Boltzmann Machine | Illustration: Gossamer, Wikipedia, Public Domain

 

Intelligence and Proteins

If this year’s Physics Prize honors the application of physical principles to the development of artificial neural networks, which led to modern AI tools, the Chemistry Prize celebrates researchers who harnessed these technologies for groundbreaking scientific advances—in this case, deciphering the three-dimensional structure of proteins and designing artificial proteins.

Proteins are molecules made of long chains of amino acids, and they perform a vast array of functions in living organisms. Much of the structure of our body’s cells is built from proteins, and proteins also carry out the chemical processes essential for life. We couldn’t breathe without hemoglobin proteins transporting oxygen to cells, digest food without digestive enzymes (which are proteins), fight infections without antibodies (protein molecules), or grow and reproduce without protein enzymes that replicate our genetic material.

The immense diversity of proteins in nature stems from different combinations of just 20 amino acids. Each protein is a long chain of hundreds or thousands of amino acids arranged in a specific sequence dictated by genes. After the amino acids are strung together like beads, the chain folds into a complex three-dimensional structure. In this folded state, beads that were distant along the chain suddenly find themselves close together and must align in terms of electrical charge, solubility, spatial configuration, and more. These interactions influence not only the structure but also the function of specific protein regions. Thus, the correct 3D structure is crucial for a protein’s function—misfolded proteins usually malfunction and are sent for recycling.

Understanding a protein’s three-dimensional structure is vital for science, medicine, industry, and other fields. For example, to develop a drug that neutralizes an enzyme, scientists must understand the enzyme’s structure and function to design a molecule that efficiently binds to it—or outcompetes it for binding with certain receptors.

Determining the 3D structure of proteins has traditionally been a complex task. The primary method was X-ray crystallography, which involves crystallizing the protein, exposing it to X-rays, and analyzing the resulting diffraction patterns. Determining the structure of a single protein this way often took years of work, sophisticated and expensive equipment, and significant luck—not all proteins can be crystallized.

Today, identifying a protein’s amino acid sequence is relatively straightforward, but deducing its 3D structure from the sequence alone has proven highly challenging due to the immense variety of possible sequence combinations and foldings. Over time, a competitive race emerged among academic labs and private companies to develop methods for predicting a protein’s 3D structure from its amino acid sequence, primarily using specialized software.

In the 1990s, researchers launched CASP (Critical Assessment of protein Structure Prediction), a biennial competition challenging teams to predict the structures of proteins whose 3D conformations were not yet known. Experimental methods were used in parallel to determine the actual structures, allowing comparisons between predicted and real structures. Early software achieved only about 40% accuracy, but by the late 2010s, a new private contender transformed the field.

DeepMind’s AlphaFold entered the competition in 2018, achieving 60% accuracy. By 2020, its predictions exceeded 90% accuracy! Founded in 2010 by Demis Hassabis, a British computer scientist with a PhD in neuroscience, DeepMind originally aimed to develop AI for complex games like chess. However, it later pivoted to protein structure prediction, thanks in part to John Jumper, a scientist with a PhD in theoretical chemistry who joined DeepMind and came to lead the AlphaFold team.

AlphaFold leverages the vast repository of protein structures accumulated over nearly a century, using it to predict how a given protein folds into its 3D shape. In its first stage, AlphaFold’s algorithm compares the target protein’s sequence to those of other proteins. It then analyzes additional parameters, such as conserved regions in the sequence that have remained unchanged throughout evolution. This information helps generate a 2D representation of the protein, which is then compared against a database of over 180,000 known protein structures. Through iterative refinement, AlphaFold achieves remarkable accuracy.
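One ingredient mentioned above, spotting conserved positions across related sequences, is simple to illustrate. This is a generic bioinformatics calculation, not AlphaFold's actual algorithm: score each column of an alignment by the fraction of sequences sharing its most common residue, so positions preserved by evolution score close to 1:

```python
from collections import Counter

def conservation(alignment):
    """Per-column conservation for equal-length aligned sequences:
    the fraction of sequences agreeing on the most common residue."""
    scores = []
    for column in zip(*alignment):
        top_count = Counter(column).most_common(1)[0][1]
        scores.append(top_count / len(column))
    return scores

# Three toy aligned fragments: the first two positions never vary,
# so they score 1.0; the third position varies freely.
scores = conservation(["MKV", "MKL", "MKI"])
```

Highly conserved positions hint at residues the structure or function cannot do without, which is why conservation is a useful signal for structure prediction.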

For this groundbreaking advancement, Hassabis and Jumper share half of this year’s Chemistry Prize.


Significant success in deciphering the 3D structures of proteins based on their amino acid sequences. Illustration of AlphaFold’s 3D structural prediction method | From the research paper by Jumper et al., Wikipedia, CC BY 4.0

The other half of the prize goes to David Baker of the University of Washington. Born in 1962, Baker developed an early software program called Rosetta in the 1990s to predict protein structures. He and his colleagues realized that the software could also be used in reverse: by inputting a desired 3D protein structure, the software could estimate the amino acid sequence required to produce it. In 2003, this approach successfully predicted the composition of a protein with a human-designed structure. Using crystallography, researchers confirmed that the synthetic protein’s structure matched the one designed in the software.

This breakthrough gave rise to a new field of designing artificial proteins. It allows for the creation of entirely novel proteins with predefined properties, such as proteins that bind to opioid molecules, molecular motor proteins, vaccine-related proteins, or specialized enzymes that synthesize new molecules.

Following AlphaFold’s success, Baker recognized AI’s transformative potential and integrated a similar model into Rosetta, significantly enhancing its capacity for designing new proteins. In 2008, Baker received the Sackler Prize from Tel Aviv University for his contributions to the field.


From deciphering the structure of natural proteins to designing artificial ones: John Jumper (right), Demis Hassabis, and David Baker | Photos: TWIS, National Academies - Earth and Life Studies, Duncan.Hull, Wikipedia

A Small Molecule with a Big Impact

The Nobel Prize in Medicine is the only scientific Nobel this year unrelated to artificial intelligence, but it is closely tied to protein production. It is awarded to Gary Ruvkun and Victor Ambros for discovering microRNA and its role in regulating gene expression.

Proteins, as mentioned earlier, are made up of sequences of amino acids. How does the ribosome—the machine that assembles proteins—know the correct sequence? It follows the DNA sequence in the cell nucleus. Much of our genetic material consists of instructions for protein production. When a cell needs to produce a specific protein, it creates a "working copy" of the DNA in a similar material called RNA. This copy, known as messenger RNA (mRNA), exits the cell nucleus, binds to the ribosome, and serves as a template for protein synthesis.
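The flow from DNA to mRNA to protein is mechanical enough to sketch in code. This toy version reads the template DNA strand and uses only a four-codon fragment of the genetic code table (the real table has 64 entries):

```python
# A tiny, illustrative fragment of the 64-entry genetic code table.
CODON_TABLE = {"AUG": "Met", "UUU": "Phe", "GGC": "Gly", "UAA": "STOP"}

def transcribe(template_dna):
    """Build mRNA as the complement of the template strand,
    with uracil (U) standing in for thymine (T)."""
    pairing = {"A": "U", "T": "A", "G": "C", "C": "G"}
    return "".join(pairing[base] for base in template_dna)

def translate(mrna):
    """Read the mRNA three bases (one codon) at a time, as the
    ribosome does, stopping at a stop codon or an unknown one."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE.get(mrna[i:i + 3])
        if amino_acid is None or amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

mrna = transcribe("TACAAACCGATT")     # -> "AUGUUUGGCUAA"
protein = translate(mrna)             # -> ["Met", "Phe", "Gly"]
```

The two steps mirror the biology: `transcribe` plays the role of making the "working copy," and `translate` plays the ribosome reading it codon by codon.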

Almost every cell in the body contains a full copy of our DNA, which includes instructions for building all proteins. However, not every cell produces all proteins. Muscle cells produce proteins for contractile fibers, retinal cells produce proteins that change shape in response to light, and nerve cells produce proteins enabling the transmission of electrical signals. How does a cell "know" it should be a muscle cell, for example, and produce proteins accordingly? Ambros and Ruvkun uncovered one mechanism ensuring each cell produces only the proteins it needs.

In the 1960s, researchers discovered special proteins called transcription factors that regulate the production of other proteins in a given cell. These proteins bind to specific DNA regions and influence whether a gene is transcribed into mRNA, ultimately determining whether the cell produces a particular protein. For many years, transcription factors were believed to be the primary regulators of protein production in cells.

In the late 1980s, Ambros and Ruvkun were conducting postdoctoral research at the Massachusetts Institute of Technology (MIT) in the lab of Robert Horvitz, who would later receive a Nobel Prize in Medicine. Ruvkun, born in California in 1952, came to MIT after earning his PhD at Harvard University. Ambros, born in New Hampshire in 1953, stayed at MIT after completing his PhD there in 1979. They studied embryonic development in the tiny roundworm C. elegans, a model organism widely used in genetics research. Each focused on a different gene influencing developmental timing in the worm. Ultimately, they discovered that the two genes, lin-4 and lin-14, produce complementary RNA molecules that can bind to each other. This binding prevents the ribosome from reading lin-14 RNA and producing its protein, and may also accelerate the degradation of lin-14 mRNA.
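The base pairing that lets the lin-4 RNA silence lin-14 can be sketched as a simple complementarity test. The sequences below are made up for illustration; the check demands perfect antiparallel Watson-Crick pairing and ignores the G-U wobble pairs that real RNA duplexes tolerate:

```python
def is_complementary(mirna, target_site):
    """True if the miRNA pairs base-for-base with an equal-length
    target site, read antiparallel (one strand reversed)."""
    watson_crick = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G")}
    if len(mirna) != len(target_site):
        return False
    return all((a, b) in watson_crick
               for a, b in zip(mirna, reversed(target_site)))

# Made-up example sequences, for illustration only.
paired = is_complementary("AUGGC", "GCCAU")     # every base pairs
unpaired = is_complementary("AUGGC", "AUGGC")   # no pairing at all
```

When such pairing occurs between two RNA molecules, the duplex physically blocks the ribosome, which is the silencing mechanism Ambros and Ruvkun uncovered.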


Worked together and in parallel to discover small RNA molecules that regulate protein production. Gary Ruvkun (right) and Victor Ambros | Photos: Rosalindclee, Adam Fagen, Wikipedia

This discovery revealed a new type of RNA—microRNA. Unlike messenger RNA, microRNAs do not carry instructions for making proteins; instead, they regulate the production of proteins from other genes. Unlike protein transcription factors, microRNAs act after the mRNA of other genes has already been made, preventing protein synthesis from proceeding. This allows the cell to respond quickly when needed, stopping protein production even after the mRNA has been transcribed. For example, if body temperature rises, cells may produce proteins that protect DNA from heat damage; when the temperature drops, they must rapidly halt production to conserve resources. Ambros and Ruvkun published their findings in two landmark papers in 1993.

Initially, many believed this mechanism was unique to the worms they studied. However, in 2000, Ruvkun and colleagues discovered another microRNA, let-7, present in the genomes of many animals. This hinted at the widespread nature of microRNA. Within a few years, hundreds of microRNA genes were identified. Today, this mechanism is known to exist in all multicellular organisms, including plants and fungi, and is considered essential for their development.

The study of microRNA has taught us much about the regulation of protein production in cells and allows us to investigate what happens when this mechanism malfunctions. Abnormal expression of microRNA can lead to various diseases, including cancer, skeletal abnormalities, and disorders of various organs. And if a deficiency or excess of microRNA can cause disease, it may be possible to develop treatments based on these small molecules.

Among the many awards they have received, Ruvkun and Ambros were awarded the Wolf Prize in 2014 alongside Israeli-born researcher Nahum Sonenberg. Three years earlier, Ruvkun also received the Dan David Prize, awarded at Tel Aviv University, along with biologist Cynthia Kenyon.

Unfortunately, it must be noted that this year’s science Nobel Prizes did not include any female laureates, after two consecutive years in which women received this prestigious recognition.


The molecule demonstrating how widespread microRNA is in multicellular organisms. MicroRNA let-7, discovered in 2000 | Illustration: CAROL AND MIKE WERNER / SCIENCE PHOTO LIBRARY

Gaps and Traumas

The Nobel Memorial Prize in Economic Sciences for 2024 is awarded to three researchers from the United States: Daron Acemoglu, Simon Johnson, and James A. Robinson. Their work examines the causes of economic disparities between societies and highlights the critical role of societal institutions in fostering national prosperity.

The 2024 Nobel Prize in Literature was awarded to South Korean author and poet Han Kang. She has published 13 works of prose and poetry, addressing themes of personal and national crises and traumas. Her works include The Vegetarian, which explores the life of a dutiful wife who decides to stop eating meat and even cooking it for her husband. The prize committee noted that Han, born in 1970, receives the award “for her intense poetic prose that confronts historical traumas and exposes the fragility of human life.”


 A hibakusha—the Japanese term for survivors of the atomic bombings—shares their story with young people as part of activities by Nihon Hidankyō in 2007 | Photo: Buroll, Wikipedia, Public Domain

The Nobel Peace Prize is also linked to confronting traumas this year, awarded to the Japanese organization Nihon Hidankyō, an abbreviation of its full name: the Japan Confederation of A- and H-Bomb Sufferers Organizations. Established in 1956, Nihon Hidankyō represents survivors of the atomic bombings of Hiroshima and Nagasaki in August 1945, which marked the end of World War II. It also advocates for victims of U.S. nuclear testing at Bikini Atoll during the 1950s.

The prize committee announced that the organization “is receiving the Nobel Peace Prize for 2024 for its efforts to achieve a world free of nuclear weapons and for demonstrating through witness testimony that nuclear weapons must never be used again.”