Physicists at MIT have developed a new way to probe inside an atom’s nucleus, using the atom’s own electrons as “messengers” within a molecule.
In a study appearing today in the journal Science, the physicists precisely measured the energy of electrons whizzing around a radium atom that had been paired with a fluoride atom to make a molecule of radium monofluoride. They used the environments within molecules as a sort of microscopic particle collider, which contained the radium atom’s electrons and encouraged them to briefly penetrate the atom’s nucleus.
Typically, experiments to probe the inside of atomic nuclei involve massive, kilometers-long facilities that accelerate beams of electrons to speeds fast enough to collide with and break apart nuclei. The team’s new molecule-based method offers a table-top alternative to directly probe the inside of an atom’s nucleus.
Within molecules of radium monofluoride, the team measured the energies of a radium atom’s electrons as they pinged around inside the molecule. They discerned a slight energy shift and determined that electrons must have briefly penetrated the radium atom’s nucleus and interacted with its contents. As the electrons winged back out, they retained this energy shift, providing a nuclear “message” that could be analyzed to sense the internal structure of the atom’s nucleus.
The team’s method offers a new way to measure the nuclear “magnetic distribution.” In a nucleus, each proton and neutron acts like a small magnet, and they align differently depending on how the nucleus’ protons and neutrons are spread out. The team plans to apply their method to precisely map this property of the radium nucleus for the first time. What they find could help to answer one of the biggest mysteries in cosmology: Why do we see much more matter than antimatter in the universe?
“Our results lay the groundwork for subsequent studies aiming to measure violations of fundamental symmetries at the nuclear level,” says study co-author Ronald Fernando Garcia Ruiz, who is the Thomas A. Franck Associate Professor of Physics at MIT. “This could provide answers to some of the most pressing questions in modern physics.”
The study’s MIT co-authors include Shane Wilkins, Silviu-Marian Udrescu, and Alex Brinson, along with collaborators from multiple institutions including the Collinear Resonance Ionization Spectroscopy Experiment (CRIS) at CERN in Switzerland, where the experiments were performed.
Molecular trap
According to scientists’ best understanding, there must have been almost equal amounts of matter and antimatter when the universe first came into existence. However, the overwhelming majority of what scientists can measure and observe in the universe is made from matter, whose building blocks are the protons and neutrons within atomic nuclei.
This observation is in stark contrast to what our best theory of nature, the Standard Model, predicts, and it is thought that additional sources of fundamental symmetry violation are required to explain the almost complete absence of antimatter in our universe. Such violations could be seen within the nuclei of certain atoms such as radium.
Unlike most atomic nuclei, which are spherical in shape, the radium atom’s nucleus has a more asymmetrical configuration, similar to a pear. Scientists predict that this pear shape could significantly amplify the effects of fundamental symmetry violations, to the extent that they may become observable.
“The radium nucleus is predicted to be an amplifier of this symmetry breaking, because its nucleus is asymmetric in charge and mass, which is quite unusual,” says Garcia Ruiz, whose group has focused on developing methods to probe radium nuclei for signs of fundamental symmetry violation.
Peering inside the nucleus of a radium atom to investigate fundamental symmetries is an incredibly tricky exercise.
“Radium is naturally radioactive, with a short lifetime, and we can currently only produce radium monofluoride molecules in tiny quantities,” says study lead author Shane Wilkins, a former postdoc at MIT. “We therefore need incredibly sensitive techniques to be able to measure them.”
The team realized that by placing a radium atom in a molecule, they could contain and amplify the behavior of its electrons.
“When you put this radioactive atom inside of a molecule, the internal electric field that its electrons experience is orders of magnitude larger compared to the fields we can produce and apply in a lab,” explains Silviu-Marian Udrescu PhD ’24, a study co-author. “In a way, the molecule acts like a giant particle collider and gives us a better chance to probe the radium’s nucleus.”
Energy shift
In their new study, the team first paired radium atoms with fluoride atoms to create molecules of radium monofluoride. They found that in this molecule, the radium atom’s electrons were effectively squeezed, increasing the chance for electrons to interact with and briefly penetrate the radium nucleus.
The team then trapped and cooled the molecules and sent them through a system of vacuum chambers, into which they also sent lasers, which interacted with the molecules. In this way the researchers were able to precisely measure the energies of electrons inside each molecule.
When they tallied the energies, they found that the electrons appeared to have a slightly different energy compared to what physicists expect if they did not penetrate the nucleus. Although this energy shift was small — just a millionth of the energy of the laser photon used to excite the molecules — it gave unambiguous evidence of the molecules’ electrons interacting with the protons and neutrons inside the radium nucleus.
“There are many experiments measuring interactions between nuclei and electrons outside the nucleus, and we know what those interactions look like,” Wilkins explains. “When we went to measure these electron energies very precisely, it didn’t quite add up to what we expected assuming they interacted only outside of the nucleus. That told us the difference must be due to electron interactions inside the nucleus.”
“We now have proof that we can sample inside the nucleus,” Garcia Ruiz says. “It’s like being able to measure a battery’s electric field. People can measure its field outside, but to measure inside the battery is far more challenging. And that’s what we can do now.”
Going forward, the team plans to apply the new technique to map the distribution of forces inside the nucleus. Their experiments have so far involved radium nuclei that sit in random orientations inside each molecule at high temperature. Garcia Ruiz and his collaborators would like to be able to cool these molecules and control the orientations of their pear-shaped nuclei such that they can precisely map their contents and hunt for the violation of fundamental symmetries.
“Radium-containing molecules are predicted to be exceptionally sensitive systems in which to search for violations of the fundamental symmetries of nature,” Garcia Ruiz says. “We now have a way to carry out that search.”
This research was supported, in part, by the U.S. Department of Energy.
This image depicts the radium atom’s pear-shaped nucleus of protons and neutrons in the center, surrounded by a cloud of electrons (yellow), and an electron (yellow ball with arrow) that has a probability to be inside the nucleus. In the background is the spherical nucleus of a fluoride atom, which joins to form the overall molecule of radium monofluoride.
Back and better than ever, the Cambridge Science Carnival, an annual free family-friendly science extravaganza, was held on Sunday, Sept. 21, at the Kendall/MIT Open Space.
Founded by the MIT Museum in 2007, and organized with the support of MIT and the City of Cambridge, the 2025 event drew approximately 20,000 attendees and featured more than 140 activities, demonstrations, and installations tied to the topics of science, technology, engineering, arts, and mathematics (STEAM).
Among the carnival’s wide variety of activities was the popular robot petting zoo, an annual showcase involving more than a dozen companies and local robotics clubs, including FIRST Tech Challenge and FIRST Robotics Competition. Participants were invited to engage in a range of robotics activities, from building with LEGOs and erector sets to piloting underwater robots to learning about the science of automation.
“Every exhibit and every moment of discovery today reinforces why Cambridge remains a global leader in STEAM,” Cambridge Mayor Denise Simmons said in her remarks at the event. “The creativity, ingenuity, and joy on display here today are a powerful reminder that science isn’t just for labs and lecture halls — it’s for everyone.”
Other activities included an appearance from the popular kid-friendly podcast “Tumble Science,” with co-host Marshall Escamilla testing fans’ knowledge of different STEAM topics drawn from “Tumble Science.” Clark University’s smoke-ring air cannons were a particular hit with the under-7-year-old set, while “Cycle To Science” showed off a gravity-defying bicycle wheel that, while spinning, was suspended on one side by a simple piece of string. Attendees also enjoyed live music, food trucks, and activities exploring everything from pipette art to the chemistry of glass.
At the robot petting zoo, FIRST Robotics volunteer mentor Dominique Regli reflected on the event as someone who was herself first inspired by similar festivals more than a decade earlier.
“Seeing kids of all ages interact with the robots made me think back to when I was a seventh grader, and how getting to see some of these robots for the first time was truly life-changing for me,” said Regli, who has been involved with FIRST Robotics since 2018 and is now an MIT computer science PhD student and affiliate of the Computer Science and Artificial Intelligence Laboratory (CSAIL). “These types of events are so important to expose students to what's possible.”
Throughout its history, a key aspect of the carnival has been MIT’s close collaboration with the City of Cambridge, which ran several activities. Cambridge Public School teachers led, and the Public Works Department hosted, a “Trash or Treasure” activity that helped teach kids about recycling and composting. The carnival is a major contribution to the Institute’s objective of connecting the MIT ecosystem with Cambridge residents and local communities.
“Cambridge is one of the world’s leading science cities, with more Nobel laureates per capita than any other city on the planet,” says Michael John Gorman, director of the MIT Museum. “The Cambridge Science Carnival is a beloved day in the Cambridge calendar which brings science out of the labs and onto the streets.”
With a focus on engaging families and kids ranging from kindergarten to the eighth grade, one important outcome this year was to give undergraduate and graduate students the opportunity to showcase their work and hone their skills in clearly communicating science concepts to the public. There were over 50 activities led by MIT students, as well as participants from other local schools such as Boston College and Boston, Clark, Harvard, Northeastern, and Tufts universities.
Typically organized as part of the annual Cambridge Science Festival, this year the Cambridge Science Carnival returned as a standalone event while the larger festival undergoes a strategic transition for its relaunch in 2026. The MIT Museum offered free admission during the carnival and is always free to Cambridge residents, as well as active military, EBT cardholders, members of the Massachusetts Teachers Association, and MIT ID holders.
“For MIT researchers, discovery often happens in a lab or a classroom, but the truth is, the spark of discovery can happen anywhere,” said Alfred Ironside, MIT vice president for communications, in remarks at the event. “That’s really what today is about: feeding curiosity, encouraging questions, and showing that science is not locked away behind closed doors. It’s for everyone.”
A doctoral student has developed a text message-based system that regularly updates both long-term hospital patients’ and care facilities’ availability statuses, smoothing a normally time-consuming placement process.
The statistics for the University of Cambridge are available on our website as part of our ongoing commitment to transparency and openness around the use of animals in research.
This coincides with the publication of the Home Office report on the statistics of scientific procedures on living animals in Great Britain in 2024.
The 10 organisations are listed below alongside the total number of procedures they carried out on animals for scientific research in Great Britain in 2024. Of these 1,379,399 procedures, more than 99% were carried out on mice, fish, rats, and birds and 82% were classified as causing pain equivalent to, or less than, an injection.
This is the tenth consecutive year that organisations have come together to publicise their collective statistics and examples of their research.
Organisation – Number of Procedures (2024)
The Francis Crick Institute – 200,055
University of Oxford – 199,730
University of Cambridge – 190,448
UCL – 175,687
Medical Research Council – 140,602
University of Edinburgh – 136,862
King's College London – 106,300
University of Glasgow – 99,509
University of Manchester – 81,252
Imperial College London – 48,954
TOTAL – 1,379,399
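As a quick arithmetic check (a minimal sketch using only the figures reported in the table above), the ten organisation-level counts do sum to the stated total:

```python
# Reported 2024 procedure counts for the ten organisations listed above.
procedures = {
    "The Francis Crick Institute": 200_055,
    "University of Oxford": 199_730,
    "University of Cambridge": 190_448,
    "UCL": 175_687,
    "Medical Research Council": 140_602,
    "University of Edinburgh": 136_862,
    "King's College London": 106_300,
    "University of Glasgow": 99_509,
    "University of Manchester": 81_252,
    "Imperial College London": 48_954,
}

total = sum(procedures.values())
print(f"{total:,}")  # 1,379,399 -- matches the TOTAL row above
```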
In total, 72 organisations have voluntarily published their 2024 animal research statistics.
All organisations are committed to the ethical framework called the ‘3Rs’ of replacement, reduction and refinement. This means avoiding or replacing the use of animals where possible, minimising the number of animals used per experiment and optimising the experience of the animals to improve animal welfare. However, as institutions expand and conduct more research, the total number of animals used can rise even if fewer animals are used per study.
All organisations listed are signatories to the Concordat on Openness on Animal Research in the UK, which commits them to being more open about the use of animals in scientific, medical and veterinary research in the UK. More than 130 organisations have signed the Concordat, including UK universities, medical research charities, research funders, learned societies and commercial research organisations.
Wendy Jarrett, Chief Executive of Understanding Animal Research, which developed the Concordat on Openness, said: “Animal research remains a small but vital part of the quest for new medicines, vaccines and treatments for humans and animals. Alternative methods are increasingly being phased in, but, until we have sufficient reliable alternatives available, it is important that organisations that use animals in research maintain the public’s trust in them. By providing this level of information about the numbers of animals used, and the experience of those animals, as well as details of the medical breakthroughs that derive from this research, these Concordat signatories are helping the public to make up their own minds about how they feel about the use of animals in scientific research in Great Britain.”
Professor Jon Simons, Head of the School of Biological Sciences at the University of Cambridge, said: “Animal research remains critical for understanding complex biological systems and is an essential step in the development of new medicines, vaccines and treatments for both humans and animals. We are committed to continuing to reduce the number of animals used in biomedical research, and our scientists are actively working on new methods and techniques that will provide robust scientific alternatives.”
CASE STUDY: Mice are vital in the search for effective new dementia treatments
Cambridge researchers are leading drug discovery to develop safer, more effective treatments for the millions of people affected by Alzheimer’s and other neurodegenerative diseases.
“Dementia has often been viewed as something that happens normally as people age, but it's not. It's a disease that we need to treat, so that people can live well and stay independent in later life,” said David Harrison. “But many pharmaceutical companies have lost confidence in working in this area because the risk of failure is too great.”
With expertise in drug discovery, Harrison’s team at Cambridge’s ALBORADA Drug Discovery Institute is designing and making chemical molecules - the basis of future drugs - and testing whether they work on novel targets in the body. The aim is to develop these ideas to the point where pharmaceutical partners can more confidently take things forward.
While the team routinely uses test-tube and computer-based models, animal models are vital in understanding how the many different cell types in the brain interact together in disease.
They’re also vital in understanding how potential drugs are metabolised and distributed throughout the body, and in looking for any adverse effects that may occur in other tissues.
Harrison said: “Almost one million people are estimated to be living with dementia in the UK. We need to find better treatment options. The animals we use are an essential part of the drug discovery process - they could help us change people’s lives.”
The 10 organisations in Great Britain that carry out the highest number of animal procedures - those used in medical, veterinary and scientific research - have released their annual statistics today.
Animal research... is an essential step in the development of new medicines, vaccines and treatments for both humans and animals.
A study led by researchers at the University of Cambridge found that impaired movement of cerebrospinal fluid (CSF) – the clear liquid that cushions and cleans the brain – predicted risk of dementia later in life among 40,000 adults recruited to UK Biobank. Their findings are published today in Alzheimer's & Dementia: The Journal of the Alzheimer's Association and are being presented at the World Stroke Congress 2025 in Barcelona.
In the healthy brain, the so-called glymphatic system serves to clear out toxins and waste materials, keeping the brain healthy. Only discovered as recently as 2012, this system functions by flushing CSF through the brain along tiny channels around blood vessels known as perivascular spaces. It collects waste then drains out of the brain, helping keep it clean and healthy.
The glymphatic system is thought to be important in protecting against many of the common forms of dementia, which are often characterised by the build-up of toxic substances in the brain – for example, Alzheimer's disease sees amyloid ‘plaques’ and tau ‘tangles’ accumulate in brain tissue.
One of the most common forms of dementia is vascular dementia, caused by reduced blood flow to the brain. The most common cause of this type of dementia is cerebral small vessel disease, which affects the small blood vessels in the brain. But the impact of cerebral small vessel disease is even greater because it also interacts with other dementias, making them worse; for example, a study of nuns in the US found that among those nuns whose brains showed signs of Alzheimer's disease post mortem, only around half exhibited symptoms of dementia – but this increased to around nine in 10 if they also had cerebral small vessel disease.
Professor Hugh Markus and colleagues at the University of Cambridge wanted to see whether cerebral small vessel disease and other cardiovascular risk factors damage the glymphatic system – and whether this in turn increases the risk of dementia.
Until recently, it has only been possible to study glymphatic function in mice, but recent advances in MRI scanning have made it possible to study it indirectly in humans. Even so, this was only practical in relatively small numbers, until Yutong Chen, while a medical student at the University of Cambridge, developed machine learning algorithms capable of assessing glymphatic function from MRI scans at scale.
The team applied the algorithm to MRI scans taken from around 40,000 adults in UK Biobank. They found that three biomarkers – biological signatures – associated with impaired glymphatic function, assessed at baseline, predicted the risk of dementia occurring over the subsequent decade. One of these was DTI-ALPS, a measure of the diffusion of water molecules along the perivascular spaces. Another was the size of the choroid plexus, where the CSF is produced. The third measure reflected the flow velocity of CSF into the brain.
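The study's actual statistical pipeline is not detailed here; as an illustrative sketch only, an association of this kind is commonly tested with a survival model such as Cox proportional hazards, relating the baseline imaging markers to time to dementia. The data file and column names below are hypothetical placeholders, not UK Biobank fields.

```python
# Illustrative sketch only (hypothetical data and column names): relating
# baseline glymphatic markers to incident dementia with a Cox model.
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical per-participant table:
#   dti_alps             - DTI-ALPS index (diffusion along perivascular spaces)
#   choroid_plexus_vol   - choroid plexus volume from the baseline MRI
#   csf_inflow_velocity  - flow velocity of CSF into the brain
#   years_followed       - time to dementia diagnosis or censoring
#   dementia             - 1 if dementia occurred during follow-up, else 0
df = pd.read_csv("glymphatic_markers.csv")

cph = CoxPHFitter()
cph.fit(
    df[["dti_alps", "choroid_plexus_vol", "csf_inflow_velocity",
        "years_followed", "dementia"]],
    duration_col="years_followed",
    event_col="dementia",
)
cph.print_summary()  # hazard ratio per unit change in each baseline marker
```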
Yutong Chen, from the Department of Clinical Neurosciences at Cambridge, said: “Although we have to be cautious about indirect markers, our work provides good evidence in a very large cohort that disruption of the glymphatic system plays a role in dementia. This is exciting because it allows us to ask: how can we improve this?”
Further analysis showed that several cardiovascular risk factors impaired glymphatic function – and hence increased dementia risk – and that this was partly through causing cerebral small vessel disease, which is visible in the MRI scans.
First author Hui Hong, now a radiologist at the Second Affiliated Hospital of Zhejiang University, Hangzhou, China, said: “We already have evidence that small vessel disease in the brain accelerates diseases like Alzheimer's, and now we have a likely explanation why. Disruption to the glymphatic system is likely to impair our ability to clear the brain of the amyloid and tau that causes Alzheimer's disease.”
The research suggests possible approaches for reducing dementia risk. One is to look at strategies for improving glymphatic function. Sleep plays an important role in glymphatic function, and so disrupted sleep patterns are likely to impair its ability to clear toxins. Alternatively, there may be existing medicines that could be repurposed, or new ones that could be developed, to improve glymphatic function.
Another possible approach is to treat vascular risk factors such as high blood pressure. This is supported by recent studies: the SPRINT MIND trial, for example, showed that intensive blood pressure control (maintaining a systolic blood pressure of less than 120 mm Hg) led to a 20% reduction in cognitive decline or dementia compared to participants in the standard treatment group.
Professor Markus, who leads the Stroke Research Group at the University of Cambridge and is a Fellow of Clare Hall, Cambridge, said: “We already know the importance of cardiovascular risk factors when it comes to dementia, and our findings further emphasise this link.
“At least a quarter of all dementia risk is accounted for by common risk factors like blood pressure and smoking. If these impair glymphatic function, then we can intervene. Treating high blood pressure or encouraging people to stop smoking would be an achievable way of helping the glymphatic system work better.”
Professor Bryan Williams, Chief Scientific and Medical Officer at the British Heart Foundation, said: “This study offers us a fascinating glimpse into how problems with the brain's waste clearance system could be quietly increasing the chances of developing dementia later in life. By improving our understanding of the glymphatic system, this study opens exciting new avenues for research to treat and prevent dementia. It also emphasises the importance of managing known cardiovascular risk factors, such as high blood pressure, for reducing dementia risk.”
The research was funded by the British Heart Foundation, with additional support from the National Institute for Health and Care Research Cambridge Biomedical Research Centre.
Problems with the brain’s waste clearance system could underlie many cases of dementia and help explain why poor sleep patterns and cardiovascular risk factors such as high blood pressure increase the risk of dementia.
Treating high blood pressure or encouraging people to stop smoking would be an achievable way of helping the glymphatic system work better
Class of 2029 yield tops 83%, with international students at 90%
A Harvard gate.
Stephanie Mitchell/Harvard Staff Photographer
Nearly half will pay no tuition
Harvard College on Thursday released data and demographics for the Class of 2029. This is the first class of undergraduates admitted since the Faculty of Arts and Sciences (FAS) reinstated standardized test scores as a required part of the application process.
The College offered admission to 2,003 applicants, with 1,675 accepting, for a yield rate of 83.6 percent. This is the College’s fifth consecutive admissions cycle with a yield above 83 percent. Harvard received 47,893 applications, a 10 percent increase from the most recent admissions cycle (Class of 2023) for which it required applicants to submit standardized test scores.
“The Class of 2029 reflects everything that makes Harvard College extraordinary,” said Edgerley Family Dean of the Faculty of Arts and Sciences Hopi Hoekstra. “These remarkable students come to us from across the country and around the globe with boundless curiosity and eagerness to join a community that challenges them to learn deeply, listen generously, and expand their understanding of themselves and the world around them. Even amid shifting economic realities, our commitment to access and opportunity remains unwavering. That nearly half of this class will attend Harvard tuition-free fills me with immense pride and optimism for the future they will help shape.”
The class is made up of students from 92 countries and all 50 states. International students make up 15 percent of the class. The yield rate for international students was 90.3 percent, with only eight choosing to defer their admission.
“The Class of 2029 was drawn from big cities and small towns, suburbs and farms, and from nations around the world, and all students, no matter where they’re from, where they went to high school, or what their personal circumstances might be, were admitted to Harvard because they share the extraordinary potential to change the world,” said William Fitzsimmons, dean of admissions and financial aid. “Amid several seismic shifts in higher education admissions over the past few years, as well as the effects of COVID, the Class of 2029 enters Harvard as worthy successors to the generations of students who’ve come before them.”
Of students in the class who self-identified their race, 11.5 percent identified as African American or Black, 41 percent identified as Asian American, 11 percent identified as Hispanic or Latino, and nearly 2 percent identified as Native American, Native Hawaiian, or Pacific Islander. Eight percent of the class did not disclose race or ethnicity. Applicants who self-selected more than one race are reflected in the percentages for each of their respective identified races. Racial demographics are accessible only after the admissions process is complete.
First-generation students represent 20 percent of the class. Twenty-one percent are estimated to be eligible for federal Pell Grants. In this first admitted class since the FAS announced a significant undergraduate financial aid expansion in March 2025, 45 percent are attending the College tuition-free. More than half of those students qualified for a free Harvard education with financial aid covering all expenses, including tuition, fees, food, and housing.
The class includes 16 veterans. Twelve transfer students joined the class, and 75 students were admitted from the wait-list.
Among U.S. students, geographical representation is as follows: New England, 17.8 percent; Middle Atlantic, 20.6 percent; Southern U.S., 15.9 percent; Midwest, 10 percent; Central, 1.6 percent; Mountain, 3.2 percent; Pacific, 13.6 percent.
In an effort to promote “joint research and development activities of mutual interest and benefit in the area of artificial intelligence technologies,” Chiang Mai University (CMU) has signed a memorandum of understanding (MOU) with IBM to participate in the IBM-NUS Research and Innovation Centre. CMU intends to work through the Centre to explore collaborations in artificial intelligence (AI), with emphasis on how to establish a cutting-edge AI-centric compute infrastructure, as well as an innovation agenda that aims to build Thailand’s advanced AI ecosystem.
CMU also signed an MOU with IBM to begin the process of joining the IBM Quantum Network. CMU would be a member of the IBM Quantum Innovation Centre at the National University of Singapore (NUS), which would provide cloud access to IBM quantum computers and resources for research and workforce development.
Another MOU signed between CMU and NUS in August 2025, provides opportunities for this joint research and development, and the sharing of best practices and co-innovation on next-generation AI and quantum technologies.
These MOUs pave the way for NUS and CMU to explore the co-development of research initiatives by leveraging IBM’s infrastructure for their joint research that align with regional priorities and institutional strengths in the areas of AI and quantum computing to address the pressing challenges like climate resilience, sustainable agriculture, and public health of this region.
The efforts between the three parties demonstrate a shared commitment to joint research, strengthening the ecosystem, and talent development, marking a significant step forward in advancing AI and quantum computing innovation in Singapore and Thailand.
Driving Next-Generation AI Innovation
The MOU between IBM and CMU, signed for CMU’s participation in the IBM-NUS Research & Innovation Centre, includes plans to harness cutting-edge AI infrastructure powered by prototype IBM Spyre Accelerators, part of IBM Research’s AIU (Artificial Intelligence Unit) family.
This full technology stack—combining state-of-the-art AI software, systems, and accelerators—will enable the efficient tuning and inferencing of foundation models (FMs).
The intent is for CMU to democratise AI across the Thailand ecosystem by driving lower-cost AI solutions to expand accessibility and help bridge the digital divide. It also aims to address pressing regional challenges, such as developing AI-powered geospatial models tailored for Thailand to tackle natural disasters, air pollution (PM 2.5) and flooding.
The MOU between CMU and NUS also outlines how the universities will explore advancing quantum computing research and innovation. Through its MOU with IBM, CMU intends to join the IBM Quantum Network as a member of NUS’ IBM Quantum Innovation Centre, which gives CMU access to IBM’s fleet of cloud-based quantum computers and resources.
These agreements provide opportunities for joint research and development, the sharing of best practices, and co-innovation on next-generation AI and quantum technologies, as well as plans to further explore the integration of AI and quantum capabilities and to apply advanced technologies across the FM technology stack to unlock breakthrough applications.
Building Future-Ready Skills and Ecosystems
Through the IBM-NUS Research and Innovation Centre and the IBM Quantum Innovation Centre at NUS, hands-on training and research engagements in AI and quantum computing are being planned to help equip the next generation of CMU’s innovators.
The proposed collaborations underscore the shared vision of fostering a robust AI and quantum ecosystem in Thailand, with ripple effects across ASEAN. By combining leading-edge technologies with research, education, and cross-industry collaboration, NUS and CMU aim to empower organisations, researchers, and policymakers to harness emerging technologies for long-term competitive and societal benefit.
Professor Tan Eng Chye, President of NUS, said: “NUS is delighted to partner Chiang Mai University and IBM to advance AI and quantum science through open, collaborative research, while developing the talent and resources essential for the region’s growth. Working together, we aim to strengthen regional cooperation — including with leading universities in the ASEAN University Network — and translate cutting-edge research into practical, powerful solutions for real world problems. Sustainability is a key priority for NUS, and a particularly exciting focus of this strategic collaboration will be leveraging foundation models to address challenges from climate change to disaster management. We look forward to working closely with Chiang Mai University and IBM to deliver tangible impact for society and industry across ASEAN.”
Professor Pongruk Sribanditmongkol, President of CMU, stated, “This proposed collaboration with NUS marks a major leap for Chiang Mai University in elevating our research and innovation capabilities to the global stage. IBM’s quantum technology and AI will be important elements of this future development with NUS. The plans as outlined in the MOUs will not only enable our researchers and students to work with experts and access cutting-edge technologies, but also establish a solid foundation for Thailand to become a leader in deep technology within the region. We are committed to producing high-quality research and graduates who can drive the nation forward.”
Catherine Lian, General Manager and Technology Leader, IBM ASEAN, said, “This proposed relationship between IBM and CMU reflects our commitment to responsible innovation that accelerates progress and empowers innovators. By combining world-class research with local talent development, we would be bringing this capability to Thailand through the NUS-CMU partnership, fostering the next breakthroughs in AI-driven and quantum-enabled solutions that would be designed to directly support the region’s competitiveness, resilience, and sustainability.”
Designed to drive UK growth, the Hub will connect entrepreneurs, investors, corporates, and researchers on a 2.7-acre site in Hills Road, in the centre of Cambridge, as the UK’s answer to Boston’s Lab Central and Paris’s Station F. The new facility will support science start-ups to grow and compete on the world stage.
The announcement was made as the annual Innovate Cambridge Summit brings together entrepreneurs, investors, policymakers and political leaders this week, and is part of a £500 million growth package for new homes, infrastructure and business space for the Oxford to Cambridge Growth Corridor.
It follows a new report that reveals the Cambridge area is now the most investible hub for science, and has had the highest growth of any UK region outside the capital in the last decade.
According to the new report from Beauhurst, Cambridge Enterprise, Innovate Cambridge and Cambridge Innovation Capital, Cambridge is a national economic asset where early-stage life sciences and deep tech companies have raised £7.9 billion since 2015. International investors are now involved in nearly 40% of all deals, up from just 7% a decade ago.
Cambridge’s innovation ecosystem has grown by almost 80% in the past decade, from 473 active companies in 2015 to 848 in 2025. Its spinout companies, born from University research, are powering this momentum, with spinouts accounting for 27.9% of all equity raised in the region. Total spinout investment has grown from £46 million in 2015 to £879 million in 2024, with life science spinouts raising an average of £8.4 million each in 2024, the highest for any UK city.
This growth and success have been embodied by Cambridge-born success stories, including CellCentric, a leading clinical-stage biotech developing novel cancer therapeutics; CuspAI, an AI-driven materials discovery company; and Featurespace, a world leader in adaptive behavioural analytics for financial crime prevention.
Science Minister and Oxford-Cambridge Innovation Champion, Lord Vallance, said: “Cambridge is one of the world's most fertile grounds for innovation to take root, and blossom into opportunities for investment, job creation, and progress in fields ranging from life sciences to deep tech.
“As impressive as these figures are, there is still more potential here for us to unleash. This is precisely why we are backing the Cambridge Innovation Hub, as part of our programme of work across Government to boost the entire Oxford to Cambridge Growth Corridor, and fulfil its promise as an economic engine the whole nation benefits from.”
Professor Deborah Prentice, Vice-Chancellor of the University of Cambridge, said: “Fast-tracking the Innovation Hub will help drive UK growth. It will connect entrepreneurs, investors, corporates, and our world-class researchers. It will quickly become Europe’s leading destination for early-stage deep tech and life sciences companies, and means Cambridge will continue to be a global leader in research and innovation. The world is coming to Cambridge for science. Government support means that work will now start at pace to make the Innovation Hub a reality."
Professor Andy Neely OBE, Chair of Innovate Cambridge, said: “Cambridge’s science and innovation ecosystem is one of the UK’s greatest economic assets. The data shows that the world is increasingly looking to Cambridge to find the breakthrough ideas that can change lives and drive global progress.”
Dr Kathryn Chapman, Executive Director, Innovate Cambridge, said: “The Summit is a chance to demonstrate how Cambridge continues to lead on innovation worldwide. Recognition as the fastest-growing UK hub for science investment, combined with cornerstone funding for a new international innovation hub, reflects the success of our unified vision and commitment to building a truly global innovation economy.”
This latest report follows Dealroom data published earlier this year which showed Cambridge was #1 in Europe for deep tech VC per capita, and was second globally to only the Bay Area when it came to unicorns per capita.
The Cambridge Innovation Hub has received cornerstone Government funding of at least £15 million to maintain the city’s position as a global leader in innovation.
The world is coming to Cambridge for science. Government support means that work will now start at pace to make the Innovation Hub a reality.
Using a new technique for imaging embryos in real time, a team led by scientists at the Loke Centre for Trophoblast Research, University of Cambridge, showed that abnormalities can arise at a later stage of embryo development than previously thought. This means that the tests used in some clinics may be finding errors in cells that will go on to develop into the placenta – and abnormalities in placental cells are less likely to affect the health of the fetus.
When an egg has been fertilised by a sperm, it is known as a zygote. This then divides, each new cell taking a copy of each parent’s DNA. Each of these cells itself divides and copies, and this process occurs repeatedly, developing into a hollow ball of cells, a stage of the embryo known as a blastocyst.
Around six days after fertilisation, the blastocyst implants itself into the lining of the uterus. Its outer cells develop into the placenta, a temporary organ that ensures the embryo – and the fetus into which it develops – receives the correct levels of nutrition and hormones necessary for its growth.
Assisted conception technologies are increasingly widespread due to a range of factors. These technologies involve fertilising eggs in the clinic and transferring the blastocyst into the uterus. However, before transfer, many clinics test the embryos for aneuploidy, where some cells in the embryo have the wrong number of chromosomes. When abnormalities are detected, the embryo may be deemed inviable and discarded, meaning patients may need to go through another cycle of treatment, which can prove costly.
So-called pre-implantation genetic testing for aneuploidy is a treatment ‘add on’ that may be offered to older women and those with a history of recurrent miscarriages or multiple IVF failures.
Researchers at the Loke Centre for Trophoblast Research, Cambridge, are interested in how early human embryos develop before implantation in the womb. This is because in assisted conception, as many as nine in ten embryos fail to develop to a stage where they can be transferred to the womb.
“Having a baby through assisted conception can be very challenging,” said Professor Kathy Niakan, Director of the Loke Centre for Trophoblast Research and Co-Chair of Cambridge Reproduction. “Most embryos fail to develop or to implant, and even those that are good quality may not be transferred. Much more basic research is needed to inform future clinical practice and improve rates of assisted conception.”
To help understand development of the embryo at this early stage, Professor Niakan and colleagues, in collaboration with researchers at the Francis Crick Institute, developed a new, state-of-the-art method for watching embryos live in high resolution. The details are published today in Nature Biotechnology.
The new imaging technique involves tagging DNA inside the cell nucleus with a fluorescent protein, making it visible under a microscope. The researchers then use an imaging technique known as light-sheet microscopy to observe the embryos in 3D as they develop, without damaging them.
First author Dr Ahmed Abdelbaki, a postdoctoral researcher at the Loke Centre for Trophoblast Research, said: “This is the first time that this very gentle method, with much higher resolution, has been used. It meant that we could watch the embryos as they developed over a two-day period, the longest continuous time that this process has been observed.”
Co-author Professor Ben Steventon from the Department of Genetics at the University of Cambridge said: “The unique design of the microscope allows for multiple precious embryos to be watched simultaneously, and from both sides. This has allowed the team to catch events that have previously been missed. It’s a proof of the power of direct observation to uncover unexpected findings in human biology.”
The technique led to a very unexpected finding.
Professor Niakan, the study’s senior author, said: “We were extremely surprised to discover that abnormal cell divisions can happen from scratch at a very late stage of human development. It was only by using a new imaging technique that it was possible to see this happening.”
Of the 13 embryos analysed by the team, 10% of the cells contained chromosomal abnormalities. These arose from problems as DNA was copied and distributed between cells, for example when chromosomes did not move properly during division or when a cell divided into three, rather than two.
Because these abnormalities arise at a relatively late stage of the embryo’s development, they appear in the outer layer of the blastocyst, which develops into the placenta – and it is from this layer that biopsies are taken for pre-implantation genetic testing for aneuploidy.
Professor Niakan’s team is now studying cells in the inner layer to see whether such spontaneous abnormalities can also arise there.
The embryos used in this study were donated by families who had had successful pregnancies. The families were treated at Bourn Hall Clinic and Create Fertility.
The research was supported by Wellcome, the Francis Crick Institute (which receives core funding from Cancer Research UK, the Medical Research Council and Wellcome) and the Engineering and Physical Sciences Research Council.
A test deployed in many fertility clinics to assess the viability of embryos for use in IVF is likely to overestimate the number of embryos with abnormalities, suggests a study published today.
Most embryos fail to develop or to implant, and even those that are good quality may not be transferred. Much more basic research is needed to inform future clinical practice and improve rates of assisted conception
By Dr Azhar Ibrahim Alwee, Senior Lecturer in the Dept of Malay Studies at the Faculty of Arts and Social Sciences at NUS
Berita Minggu, 19 October 2025, p11
Artificial intelligence (AI) is already being used in medicine. Computer scientist Julia Vogt explains how AI can support doctors and where human expertise remains irreplaceable.
By Prof Simon Chesterman, David Marshall Professor of Law, Vice Provost (Educational Innovation), and Dean of NUS College; and Prof Chen Tsuhan, Distinguished Professor of Computer Science and former Deputy President (Research and Technology); both at NUS
The Straits Times, 20 October 2025, Opinion, pB2
By Prof Lawrence Loh, Director, and Ms Ang Hui Min, Senior Manager (Research and Communications), both at the Centre for Governance and Sustainability at NUS Business School
The Business Times, 17 October 2025, p14
CNA Online, 16 October 2025
8world Online, 16 October 2025
Vasantham News Online, 16 October 2025
Lianhe Zaobao, 17 October 2025, News, p1
Berita Harian, 17 October 2025, p2
By Dr Mathew Mathews, Head of Social Lab and Principal Research Fellow, and Mr Melvin Tay, Research Fellow, both from the Institute of Policy Studies, Lee Kuan Yew School of Public Policy at NUS
The Straits Times, 18 October 2025, Opinion, pB3
Step study: 4,000 counts for a lot
Study of older women finds lower disease risk for those who hit that number once or twice a week
Mass General Brigham Communications
How many steps do you need to reap health benefits?
A new study by investigators from Harvard and Mass General Brigham examined 13,547 older women, comparing their step counts over a one-week period against their mortality and cardiovascular disease rates over the next decade. The researchers found that achieving just 4,000 steps one or two days per week was associated with lower risk of mortality and cardiovascular disease — and with more steps came even greater benefits, up to a point when risk reductions leveled. The results were published in the British Journal of Sports Medicine.
The health benefits seem to be associated with the total volume of steps taken, rather than how many days per week a particular threshold was achieved.
“In countries like the United States, advances in technology have made it such that we don’t really move very much, and older individuals are among those least active,” said senior author I-Min Lee, an epidemiologist at Mass General and at the Harvard Chan School. “Because of today’s low step counts, it’s increasingly important to determine the minimum amount of physical activity required to improve health outcomes, so that we can offer realistic and feasible goals for the public.”
In this federally funded study, researchers conducted a prospective cohort study of 13,547 older women (71.8 years old on average) without cardiovascular disease or cancer from the long-running Women’s Health Study. The women wore ActiGraph GT3X+ accelerometers to track their steps over seven days between 2011 and 2015. For the next 10 years, the researchers monitored mortality and cardiovascular disease incidence.
Participants were sorted by how many days per week they achieved step thresholds at or above 4,000, 5,000, 6,000, or 7,000. Those who got 4,000 steps one or two days per week had a 26 percent lower mortality risk and a 27 percent lower cardiovascular disease risk compared with those who never hit 4,000 on any day. What’s more, reaching 4,000 steps three or more days per week lowered mortality risk further, by 40 percent. Among women who reached the higher step thresholds, the reduction in cardiovascular disease risk leveled off.
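As a rough illustration of the grouping described above (a sketch under assumed data, not the study's own code), a week of accelerometer-derived daily step counts can be bucketed by how many days a threshold was reached:

```python
# Illustrative sketch: bucket a participant by the number of days in a
# 7-day wear period on which a step threshold (e.g. 4,000) was reached.
# The example data are made up for demonstration.
from typing import List

def days_at_threshold(daily_steps: List[int], threshold: int = 4000) -> int:
    """Count days with at least `threshold` steps."""
    return sum(1 for steps in daily_steps if steps >= threshold)

def exposure_group(daily_steps: List[int], threshold: int = 4000) -> str:
    """Assign the comparison category used in the analysis described above."""
    days = days_at_threshold(daily_steps, threshold)
    if days == 0:
        return "0 days"
    if days <= 2:
        return "1-2 days"
    return "3+ days"

week = [5200, 3100, 1800, 4600, 2500, 3900, 2200]  # two days over 4,000 steps
print(exposure_group(week))  # -> "1-2 days"
```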
Interestingly, the health benefits seem to be associated with the total volume of steps taken, rather than how many days per week a particular threshold was achieved. This suggests that there isn’t a “better” way to get steps — women with a similar total volume of steps, achieved either by consistent steps throughout the week or sporadic steps in just a few days, had similar health benefits.
Future research will need to explore whether these effects hold in populations beyond older, mostly white women in the U.S. Additionally, the researchers are curious to analyze even lower step count thresholds to determine whether fewer than 4,000 steps can produce similar health benefits.
“I hope our findings encourage the addition of step count metrics to physical activity guidelines, including the upcoming 2028 U.S. Physical Activity Guidelines,” said lead and corresponding author Rikuta Hamaya of Harvard and Brigham and Women’s Hospital. “If we can promote taking at least 4,000 steps once per week in older women, we could reduce mortality and cardiovascular disease risk across the country.”
Shielding Americans from corporate ‘tyranny’
Christina Pazzanese
Harvard Staff Writer
Former FTC chair Lina Khan highlights agency’s role in checking concentrated economic power
Antitrust enforcement is not a way for the federal government to penalize highly successful businesses, but rather part of a long tradition of protecting Americans from a kind of economic “tyranny” that can arise when large companies are permitted to overwhelm the commercial landscape, argues legal scholar Lina Khan.
“Anti-monopoly is a governing philosophy that broadly views concentrations of power as a threat to freedom, recognizing that tyranny comes in many guises,” said Khan while delivering the 2025 Stone Lecture on Economic Inequality last week at Harvard Kennedy School. “Just as the Constitution creates checks and balances in our government, safeguarding against concentrated political control, anti-monopoly laws create checks against concentrations of economic power.”
Khan — who served as chair of the Federal Trade Commission from 2021 to January of this year — spoke about the role federal agencies, such as the FTC, can and should play to ensure we don’t repeat the mistakes of the Industrial Revolution. That economic period, she said, concentrated wealth and prompted the establishment of the Interstate Commerce Act of 1887 and the Sherman Antitrust Act of 1890, two of the most important anti-monopoly laws.
During the lecture, and follow-up discussion with Jason Furman, Aetna Professor of the Practice of Economic Policy at the Kennedy School and Harvard’s Department of Economics, Khan talked about her time leading the FTC and the role antitrust regulation can play to remediate the harms that occur when economic power is concentrated in the hands of a few.
While at the FTC, Khan tried to eliminate what she called its “self-imposed red tape and excessive bureaucracy” that had accumulated since the Reagan administration, when anti-regulatory politics first took off. Many internal procedures went above and beyond what the law itself required, further slowing agency actions, said Khan, a member of the Columbia University Law School faculty.
“Just as the Constitution creates checks and balances in our government, anti-monopoly laws create checks against concentrations of economic power.”
Khan first garnered national attention in 2017 for an article she wrote as a 3L at Yale Law School in which she argued that U.S. antitrust regulations were not up to the task of keeping online tech giants like Amazon from becoming monopolies.
Her selection to chair the Federal Trade Commission in 2021 during the Biden administration was controversial and criticized by some in the business community, particularly among Big Tech companies like Google and Amazon who felt the agency’s aggressive approach to antitrust regulation unfairly singled them out.
Not so, Khan said. The FTC’s focus on antitrust enforcement reflected the agency’s larger goal of protecting the millions of consumers who can be hurt when companies are able to control the marketplace, but who have the least power to do something about it.
“We ended up reorienting the FTC’s approach to prioritize tackling misconduct that was resulting in the greatest harm for the greatest number of people. Practically, this meant taking on law breaking by some of the largest corporate players in the most significant sectors of our economy, including dominant providers in agriculture, healthcare, and technology,” said Khan.
During her tenure, the FTC went after firms that falsely claimed their products were made in America, enforced laws that forbade non-compete clauses preventing employees from changing jobs or launching new businesses, issued new rules that banned fake online product reviews, and required companies to make it easy to cancel online subscriptions.
Industry consolidations can have a ripple effect beyond what they do to competitors. When scrutinizing mergers, the FTC looked at the effect a deal would have not just on customers but also on the people who would work at the combined organization, she said.
“One of the lawsuits that we filed blocking the Kroger-Albertsons [supermarket] merger was the first time the government had alleged that a merger was anti-competitive, not just because it would raise prices for shoppers, but also because it would worsen conditions for workers,” an argument the judge recognized as a legitimate basis for blocking the merger, Khan said.
“So there has been some more progress; I think there certainly needs to be more.”
There used to be more fixed rules about what companies could and could not do, particularly around mergers, in order to prevent firms from exceeding a certain level of market concentration. While those rules still exist, the federal government’s approach to antitrust is “much more squishy” than it used to be, Khan said.
The old, structured approach had once been derided by many as so needlessly rigid that “productive” business consolidations would get blocked. But even judges have become frustrated with the current approach, which leaves them with too much discretion, Khan said.
“So, I think there’s a lot of room for improvement in the current antitrust regime, even just from an efficiency perspective,” she said. “One way you could help fix for that is creating more certainty and less discretion in some of the rules, which would also, candidly, be an important safeguard against political weaponization and abuse.”
Barb Kempken, director for residential dining. Niles Singer/Harvard Staff Photographer
Campus & Community
All good, except grape pizza
University Dining Services directors talk menus, special diets, financial and practical challenges of serving up 2.9 million meals per year
Anna Lamb
Harvard Staff Writer
October 22, 2025
4 min read
The diners have spoken: Lose the grape pizza.
“There was some real love out there for it,” said Barb Kempken, director for residential dining. “It had an interesting flavor profile. But people had very strong feelings about it.”
Harvard University Dining Services’ Kempken and Crista Martin, director for strategic initiatives and communication, detailed in a recent panel discussion all that goes into planning, procuring, and preparing daily meals for thousands of College students, along with professors and hundreds of staff members.
“We have about 6,700 students on the meal plan. Then you add proctors, tutors, faculty, and deans, so about 7,000 people are eating,” Kempken said. “And obviously everybody has an opinion about food and what they’d like to see and what should be on the menu.”
According to Kempken, Harvard University Dining Services, or HUDS, serves 13,000 to 15,000 meals daily across 14 dining halls. Each of Harvard’s 12 undergraduate Houses has its own dining hall, in addition to services in Annenberg Hall and Harvard Hillel.
Last year, Kempken said, residential dining served more than 2.9 million meals.
“And I say this with great pride,” she said.
All dining halls serve the same menu, which Kempken says rotates every three weeks.
So what exactly goes into deciding what food gets served up every week? First, both Kempken and Martin said, menus must be feasible.
“Menus have to be written thinking about, what can we source reliably? What do we have kitchen equipment for? You don’t want to write everything coming out of the fryer one day when you have one fryer in a House,” Kempken explained. “We also take into account what people want to eat. We try to work in the international cuisines, but also food that our teams can prepare as well.”
“When we serve dinner tonight, we need 1,000 pounds of fish. We also need 700 pounds of whatever the vegetable is that goes with that fish.”
Crista Martin
Martin added that the food has to be attainable from suppliers at both the quantity and cost needed. HUDS tries to keep its food costs capped at roughly 35 percent of the revenue generated by dining plans, she said.
“When we serve dinner tonight, we need 1,000 pounds of fish,” Martin said. “We also need 700 pounds of whatever the vegetable is that goes with that fish. That’s two ingredients, not the 100 we need at dinner. So imagine when you spread that out across 100 items across three meals a day.”
HUDS procures food and ingredients from a mix of local and national sources. According to Kempken, the service buys ingredients from more than 250 farms in Massachusetts alone.
And, she added, they contract with local businesses for premade items. HUDS works with a soup company in Chelsea and is in the process of signing a deal for bread with a bakery in Malden.
Kempken said all the house chefs and HUDS staff use data from the “text and tell” comment line along with “acceptability factors” when devising menus and deciding what items will stay or go.
“For example, if we have a certain item on the menu, what percentage was that item taken?” she said. “Most of the big overhaul we do over the summer. We begin looking at information in April or May and then we start making tweaks to the menu.”
As for special dietary considerations, Kempken said HUDS tries to accommodate allergies and kosher, halal, and plant-based diets despite challenges.
“We don’t get a giant pot of money. The meal plan is what pays for our hourly associates, the food, the equipment, everything,” she said. “And where do you find the space?”
Kosher meals are available daily at Annenberg and Pforzheimer House, and halal meals at Adams, Cabot, and Annenberg. There are plant-based options available at every hall.
But at the end of the day, Kempken said, HUDS continues to try new strategies to please the masses. (Just not grape pizza.)
On Oct. 20 during its annual meeting, the National Academy of Medicine announced the election of 100 new members, including MIT faculty members Dina Katabi and Facundo Batista, along with three additional MIT alumni.
Election to the National Academy of Medicine (NAM) is considered one of the highest honors in the fields of health and medicine, recognizing individuals who have demonstrated outstanding professional achievement and commitment to service.
Facundo Batista is the associate director and scientific director of the Ragon Institute of MGH, MIT and Harvard, as well as the first Phillip T. and Susan M. Ragon Professor in the MIT Department of Biology. The National Academy of Medicine recognized Batista for “his work unraveling the biology of antibody-producing B cells to better understand how our body’s immune system responds to infectious disease.” More recently, Batista’s research has advanced preclinical vaccine and therapeutic development for globally important diseases including HIV, malaria, and influenza.
Batista earned a PhD from the International School of Advanced Studies and established his lab in 2002 as a member of the Francis Crick Institute (formerly the London Research Institute), simultaneously holding a professorship at Imperial College London. In 2016, he joined the Ragon Institute to pursue a new research program applying his expertise in B cells and antibody responses to vaccine development, and preclinical vaccinology for diseases including SARS-CoV-2 and HIV. Batista is an elected fellow or member of the U.K. Academy of Medical Sciences, the American Academy of Microbiology, the Academia de Ciencias de América Latina, and the European Molecular Biology Organization, and he is chief editor of The EMBO Journal.
Dina Katabi SM ’99, PhD ’03 is the Thuan (1990) and Nicole Pham Professor in the Department of Electrical Engineering and Computer Science at MIT. Her research spans digital health, wireless sensing, mobile computing, machine learning, and computer vision. Katabi’s contributions include efficient communication protocols for the internet, advanced contactless biosensors, and novel AI models that interpret physiological signals. The NAM recognized Katabi for “pioneering digital health technology that enables non-invasive, off-body remote health monitoring via AI and wireless signals, and for developing digital biomarkers for Parkinson’s progression and detection. She has translated this technology to advance objective, sensitive measures of disease trajectory and treatment response in clinical trials.”
Katabi is director of the MIT Center for Wireless Networks and Mobile Computing. She is also a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL), where she leads the Networks at MIT Research Group. Katabi received a bachelor’s degree from the University of Damascus and MS and PhD degrees in computer science from MIT. She is a MacArthur Fellow; a member of the American Academy of Arts and Sciences, National Academy of Sciences, and National Academy of Engineering; and a recipient of the ACM Prize in Computing.
Additional MIT alumni who were elected to the NAM for 2025 are:
Christopher S. Chen SM ’93, PhD ’97, an alumnus of the Department of Mechanical Engineering and the Harvard-MIT Program in Health Sciences and Technology;
Michael E. Matheny SM ’06, an alumnus of the Harvard-MIT Program in Health Sciences and Technology; and
Rebecca R. Richards-Kortum SM ’87, PhD ’90, an alumna of the Department of Physics and the Harvard-MIT Program in Health Sciences and Technology.
Established originally as the Institute of Medicine in 1970 by the National Academy of Sciences, the National Academy of Medicine addresses critical issues in health, science, medicine, and related policy, and inspires positive actions across sectors.
“I am deeply honored to welcome these extraordinary health and medicine leaders and researchers into the National Academy of Medicine,” says NAM President Victor J. Dzau. “Their demonstrated excellence in tackling public health challenges, leading major discoveries, improving health care, advancing health policy, and addressing health equity will critically strengthen our collective ability to tackle the most pressing health challenges of our time.”
Arts & Culture
Cracks in America’s ‘mirror’
Former Kennedy Center president urges steps to preserve vitality of the arts
Eileen O’Grady
Harvard Staff Writer
October 22, 2025
4 min read
Deborah Rutter.
Veasey Conway/Harvard Staff Photographer
Deborah Rutter believes living an “artful life” is essential to well-being.
The former president of the John F. Kennedy Center for the Performing Arts said that means recognizing that everything that makes daily life beautiful — from the music playing in the grocery store to the architecture of the buildings along your commute — exists because of artists.
“I have a personal belief that artists hold up a mirror to who we are as human beings and who our society is.”
Deborah Rutter
“I have a personal belief that artists hold up a mirror to who we are as human beings and who our society is,” Rutter said. “Our artists are telling the story of what’s happening today. They are the truth-tellers.”
Rutter visited campus recently at the invitation of the Office for the Arts and the Harvard Kennedy School’s Culture and Civil Society Initiative. Her talk was part of an inaugural event called Culture in Action: Building Democracy Through the Arts.
“The series is really about thinking about how all art and culture is vital, and thinking about a healthy and vibrant democracy, and having conversations with national arts policy leaders around the connection between arts and democracy,” Office for the Arts Director Fiona Coffey told the audience.
These days, much of Rutter’s hope for the arts lies in local communities. Though the state of the arts feels “bleak” amid the federal administration’s cuts in funding for the National Endowment for the Arts, she said, performing arts organizations and venues of all sizes across the country continue to offer a wide range of opportunities. Rutter was dismissed by President Trump in February as part of a controversial leadership overhaul of the national cultural center of the U.S.
“The work does continue to happen, and it must continue to happen, and it is vital that the arts leaders who are providing those platforms continue to provide the good, safe, strong environment for artists to do that work,” said Rutter, who is now vice provost for the arts at Duke University.
Rutter said she would like to see arts become a regular part of the school day for all K-12 students. It’s a policy, she said, that’s key to protecting the arts and ensuring their longevity.
“There are so many stories of individuals who say, ‘My life was saved because of my drama teacher or my music teacher,’” Rutter said.
She also championed a base living wage and access to healthcare for all professional artists, something she said would provide the security necessary for them to create without taking on multiple side gigs.
“Why is a restaurant more important than an art gallery?” Rutter said. “They both are a piece of the economy. Why is a sports team more important than a dance company? This is where the policy comes in, and the value and appreciation. [John F.] Kennedy spoke about the value of the artist and how art is how we remember a society. When you think back on history, a lot of that is based on the culture of that time.”
“Empathy is critical to survival.”
Deborah Rutter
During a Q&A, one audience member asked Rutter how she sees the humanities as linked to the arts. Rutter said both help build the muscle of empathy by offering new perspectives.
“Empathy is critical to survival,” Rutter said. “We need to understand one another. We don’t have to agree with one another, but you have to have a certain level of empathy. And empathy is something that you learn by experiencing and seeing the world through others’ eyes.”
Arts & Culture
‘Harvard Thinking’: Is there a right way to write?
Illustrations by Liz Zonarich/Harvard Staff
Samantha Laine Perfas
Harvard Staff Writer
October 22, 2025
long read
In podcast, pros share tips on technique, process — and tapping ‘deepest part of yourself, even if you’re writing something that is set on a spaceship’
Writers draw upon a variety of technical skills to tell a story, such as sentence structure, character development, literary devices, and narrative techniques. But it’s an inherently personal process.
“I think you are writing from the deepest part of yourself, even if you’re writing something that is set on a spaceship,” said Lauren Groff, an award-winning novelist and 2018-2019 Radcliffe fellow. “That is a lot of very hard work for me, and it takes me a really long time to work myself into it.”
James Wood, Professor of the Practice of Literary Criticism at Harvard and a literary critic for The New Yorker, said that when he’s reviewing work, he looks for a sense of vitality. “The thing that we’re trying to feel and find is just something very alive that draws you in,” he said.
“I find writing begins with a particular character and then just figuring out what that character wants and what is getting in the way of what that character wants,” said Nick White, Associate Senior Lecturer on Fiction.
Sam Marks, Senior Lecturer on Playwriting, said that a writing teacher once told him that you can’t make anyone a better writer, you can only help them get closer to their obsessions. “It’s like we know these places are in our souls or our bones,” he said. “And I think those are assets as a writer, not bad things.”
In this episode of “Harvard Thinking,” host Samantha Laine Perfas talks with Groff, Wood, White, and Marks about their writing process — and tips for other writers to hone their own craft.
Lauren Groff: If you want to write something that’s going to affect people emotionally, you have to do it emotionally.
Nick White: And it has to cost you more than the time you’re spending writing. It pushes me to my emotional and intellectual capabilities. I feel like when something is working it is because all cylinders are firing, and I am working at the very bleeding edge of what I am capable of.
Samantha Laine Perfas: Storytelling is a huge part of the human experience. But how do you tell a good story? There’s the cliche of a writer sitting at a desk, wrestling with a page, trying to find their ever-evasive muse. There are elements of craft to consider for sure, but for many authors, creativity comes from a place deep within themselves, and it looks different from writer to writer.
So what’s the secret behind an unforgettable story?
Welcome to “Harvard Thinking,” a podcast where the life of the mind meets everyday life. Today I’m joined by:
James Wood: James Wood. I’m a professor of the practice of literary criticism at Harvard in the English department.
Laine Perfas: He’s also a literary critic for The New Yorker and the author of multiple books, including “How Fiction Works.” Then:
Marks: Sam Marks. I am a senior lecturer in the English department in playwriting, and I also teach a TV writing class.
Laine Perfas: For the past three years, he was also the director of Creative Writing. Then:
Groff: Lauren Groff. I am a novelist and short story writer, and I own a bookstore in Gainesville, Florida.
Laine Perfas: She was a 2018-2019 Radcliffe fellow, and three of her novels have been finalists for the National Book Award. And our final guest:
White: Nick White. I’m a short story writer and a novelist.
Laine Perfas: He is also an associate senior lecturer on fiction at Harvard.
And I’m Samantha Laine Perfas, your host and a writer for The Harvard Gazette. Today we’ll talk about storytelling with four writers and how they take an idea and bring it to life.
What is it that gets a story started for you? Is it an image, a character? Where do you begin?
Groff: It depends on the situation and the story that’s being told. Sometimes it’ll be something that I’ve been thinking about for a very long time. It doesn’t become a story until I experience something, or I read something else that collides with the initial idea that sort of blooms into a story with urgency and density and gravity and weight; and after that happens, it takes a much longer time to build in the subconscious.
Marks: I’m often thinking about a thing, an idea or a character or a moment or an experience, for a long time. But I think that the story itself doesn’t actually happen until I start writing it. That’s when it actually unfolds. I found that generally writing is not successful for me if I’m like, “Oh, and then this crazy twist happens.” That’s usually a disaster; and so it’s a combination of something I’ve been thinking about for a long time and also the immediate thread of whatever is going on. It’s in some ways — I’m not a musician — but like a musician improvising, like you’re searching for the right note or lick. And then you find the harmony, and you kind of take it where it goes. I apologize to all the musicians out there who are like, “This guy doesn’t know what he’s talking about.”
“It’s in some ways like a musician improvising, like you’re searching for the right note or lick.”
Sam Marks
White: I think I do a mix of both. There are still stories that I want to write that I’ve been thinking about for years that I just haven’t figured out yet how to bring to the page. And then there are stories that come more easily to me. And then once I have what Anne Lamott calls that “shitty first draft,” I have something I can work with and shape once it’s outside of my head, because I think all my stories sound good to me when they’re inside my head. And then when I put them in the cold, harsh light of the page and begin to go back and inspect it, I can see all its weaknesses, but then I have something tangible that I can work with. I think speaking in terms of images or characters, oftentimes for me, I find writing begins with a particular character and then just figuring out what that character wants and what is getting in the way of what that character wants.
Wood: I like this question because if I think of beginnings, literally for me, knowing what a first line might be or knowing what an ending might be is very helpful to me. I have much less experience than the other panelists in actually writing stories. For instance, I’ve never written a short story in my life, don’t think I could. But if I expand slightly the definition of a narrative to be, say, a nonfiction account or even stretching it a little bit here, even a review, it can be very helpful to me sometimes, even if I don’t know what’s going to go in the middle of something, to know where I’m beginning and where I’m ending, and sometimes just knowing where I’m ending, I can do it.
Laine Perfas: Nick, you mentioned characters. There’s this idea out there about how you might think of a character, but then once they exist, they take on a life of their own and they start to do things that you maybe didn’t think they were going to do when you started. Do any of you find that to be true, or in what ways do you wrestle with the character development within your stories?
White: I love when that happens. I’m working on a novel right now. I have things in the draft broadly outlined, but the sort of scenes that I begin to write sometimes, especially when I have two or more characters in a scene, it can sometimes — especially in those first drafts — feel a little bit like a science experiment of seeing how these characters are going to spark off of each other or putting the character in a particular situation.
It can be really exciting to go with it and have a kind of looseness. I’m currently reading that book, “Toni at Random,” which is about Toni Morrison being an editor. And there’s somewhere in the book where they talk about how she thinks about characters when she’s writing as whispering over her ear and telling her, like, “Oh, that’s just right. That’s right. Oh, nope, that’s not good enough. That’s not good enough.” I wish I could experience characters like that. I think that’s a beautiful way of thinking about it.
Marks: A lot of times when I write plays, the characters — I don’t see them, I hear them. It’s an aural form and I often feel like they’ve appeared in my head or my imagination or whatever, the play, for a reason, but I might not know what it is. And so it’s: Let’s see where we all go together. I think it also varies. Sometimes characters are incredibly strong and they’re very clear and you have a really good sense of them. And then sometimes they’re more hazy and you have to spend more time with them and chip away or see how they reveal themselves to you.
Groff: Often we think of character as a fixed idea that we have of specific people, but everyone is an animal. And so I think a character can shift radically depending on the environment: You can have the same people having a breakup scene, but if you have it on a beach versus on the top of Mount Everest, it’s going to go very differently. So paying attention to not only whatever is the inflexible center of what your conception of this person is, but also paying attention to the true animal nature of being a person in the world is really important at the same time.
Laine Perfas: Lauren, there was an interview that you did and you were describing your creative process as a nuclear fusion. I was wondering if you could talk a little bit more about that and what exactly that means for you.
Groff: Yeah, it’s really like my literature is born out of literature, right? And I think it’s possible to write a novel without ever having read a novel, but it’s very vanishingly rare, and it’d probably be a very bad book. Just my opinion, James can tell us whether or not that’s true. But I think if you have steeped yourself in literature, in ideas, in other people’s voices, and you’re sort of listening to the ghost of George Eliot and Charles Dickens and Toni Morrison and Langston Hughes, right, you’re going through your day listening to these things, of course it’s going to be nuclear, right? Because your ideas are always going to explode when they come in contact with other people. So I think that is one of the great and moving joys and mysteries and beauties of living this life of writing. You are constantly surrounded by explosive ideas of other people.
Marks: Can I just go back to the character thing? Aristotle basically — who wrote the “Poetics,” which is a doctrine on how to write plays in a very specific way, but very influential — puts character subservient to action. Anything that the character does that doesn’t feed the action to him is extraneous. Now, I don’t think that we live in that model anymore, but there’s a spectrum of character-driven versus plot-driven. And I think sometimes it seems dishonest if a character is shoehorned into the plot. But then also sometimes there’s the experience of listening to characters where you’re like, “I don’t care, what’s the story?” So I think it’s a balance.
Wood: But Sam, the thing about the action and character breaking apart, actually if you think of a great deal of the kind of TV writing that I’m sure you would never do, but that it’s just the stuff that’s out there now, an enormous amount of it is just simply plotting for the sake of plotting with ridiculous twists and turns that even within the confines of the thriller genre don’t really make any sense at all. And you’ll sometimes encounter a piece by a student, and it’ll be trundling along, and then suddenly, in effect, they get into a spaceship and go off to Mars, and you say, “Now why did you do that thing on Page 8?” And they’ll just say, “I don’t know. Because I can, I just felt like it.” I think, OK, this is a bit of a problem.
Marks: I fully agree with that. I think that for so many students, and so many writers in general, sometimes I think about writing as a process of confronting how you confront challenge — often it’s like, I want to quit. I want to take a break. I want to have a drink. I want to have a spaceship, right? I want someone to rescue me from the process of having to figure what the heck I have to do, because to figure it out is incredibly painstaking. And that’s the cliche of the writer tearing their hair out. But it’s because it is hard to figure out. What happens next?
White: I’m now thinking of the story that ends with a character going off in a spaceship, and what my response to the student would be: The story begins when they enter the spaceship and go off. Like, why are they? That’s the beginning.
I also liked what you said about a lot of times how we deal with conflict in our everyday lives is related to how we do that in our writerly sense. I can feel that, but I also feel that there are ways in which I allow my characters to be much braver on the page than I am and much bolder. And I think that is a really great way to think about fiction and inhabiting characters. Have your characters do the thing that you’re afraid to do. I think that is a really interesting prompt.
Laine Perfas: That actually makes me think: We’ve talked about character-driven; we’ve talked about plot-driven. I’m also curious how much of it is soul-of-the-writer driven. How much of yourselves do you feel like you are putting into these stories?
White: My first novel was very much autobiographical, but it was also kind of an alternative history for myself. It dealt a lot with conversion therapy and being gay from Mississippi and having a family that would not be accepting and dealing with all of that mess and trauma on the page in a way that I felt unable to do in my real life. I’ve been thinking a lot about this because I’ve been talking to my students a lot about setting and place, and I find it really ironic that I spent so much of my time growing up in Mississippi — I’m from a really small town called Possumneck — and I spent my whole life waiting to leave, couldn’t wait to leave; and as soon as I leave the state, I immediately start trying to write my way back in. I do feel that tension a lot in my work. And I still set things in Mississippi, and I go back and visit a lot, and that place has been very fruitful for whatever the river of my imagination is.
Marks: I once had a writing teacher, Carole Maso, who’s a fiction writer, who said you can’t make anyone a better writer. You can only help them get closer to their obsessions. And I feel like the more I write — not all of my plays are set in Harvard or New York, but even if I try to write something differently, I inevitably write about a version of the same thing, which I don’t think is bad. In some ways I feel like I’ve succeeded on some level. Not to say that you repeat yourself, but that you’re working out some of the same questions. And I think that those are questions of the self, even if they take place in different settings. I also think it’s interesting that you keep writing about Mississippi. It’s like we know these places are in our souls or our bones. And I think those are assets as a writer, not bad things.
Groff: I think you are writing from the deepest part of yourself, even if you’re writing something that is set on a spaceship. Something that is so far from your lived experience, it’s still autobiographical in a very real way, even if the contours of the story don’t accord to the contours of your life, because it’s so deeply personal. Every character is a prismatic hologram of who you are. I love to play with the titration of closeness that I’m giving the reader. That this is a joyous thing to play with, the trick of writing something that is beyond your autobiographical details, but it still really feels deeply personal. That is a lot of very hard work for me, and it takes me a really long time to work myself into it. And I do it through the music of the line, right? I can only do it by playing and failing vastly so often. And then finally finding a piece of music that corresponds to this story that I’m trying to tell. But I think all of it is the soul, if you’re doing it right.
“All of it is the soul, if you’re doing it right.”
Lauren Groff
Wood: Lauren, I have a question for you. You’ve been writing in historical periods remote-ish from the contemporary recently. I wonder whether you’ve had the experience of recognizing an autobiographical motif or impulse, even as you seemed to be in a completely different fictional universe.
Groff: Yeah. Marie de France, c’est moi. I mean, every grandiose idea she had is my idea. That’s me right there. And the girl in “The Vaster Wilds,” that’s me as a panicked young person alone, which I’ve been many times in the woods, running, I’ve done that. Sometimes I access that even if the people are distant from me in temporal terms or in even demographic terms, you find it through the body. Find it through the sensory information that you’re getting from the world. I wrote a book about a 1960s utopian commune called Arcadia. I’ve never lived in a commune, but I have in fact actually held a handmade blue bowl filled with warm oatmeal. I know how that feels. And so that is me right there holding that, when it’s actually the character holding that. So you find the bodily information that allows you to access the rest of the world.
Laine Perfas: Lauren, you mentioned failure. What does failure look like?
Groff: This is so funny that we’re doing this for the Harvard community because I think most people have never failed, ever. And I love failure, actually. I go after it. I have OCD, and so it’s really hard for me to do something and let it be imperfect. I’ve developed a process, which is insane, but allows me to embrace failure in the way that a child would embrace failure if they’re trying to build a LEGO castle. If it doesn’t work, they just trash it and start over again, and it’s just joyous. You’re just playing, you’re figuring things out. It’s delightful. So I’m purely analog. I only write with my pen. I don’t write on a computer until the very, very end. I don’t even read my drafts ever. I write a full draft and then I put it to the side. I start over again as many times as is necessary. Because I’m actually embracing failure. I want to know the limits of my ability to tell this story. And then it starts to grow its own ability to tell itself. Eventually if you do it enough times, then it’s not me, right? I’m not the one writing it; it’s the book itself that has come up against all of these obstacles and is teaching me how to put it down on the page. Failure in our society has so much negative baggage. But if you’re thinking about it as pure play the whole way through until the story itself starts to talk to you, until you know what you need to do, then that’s just pure joy, that’s sheer pleasure. And so I really love the fail draft. I love the first shitty thing that I don’t want anyone to ever read, because they couldn’t, because my handwriting’s egregious. So I let it all in. I let the failure in because it’s how you understand the world.
Wood: But that’s an amazing thing you were saying, Lauren. If you are 100 pages into a manuscript, or even just 20 pages into a manuscript, you don’t sit down and read those 20 pages; you just keep going?
Laine Perfas: Oh, just a quick note for our listeners, Lauren is now holding up a notebook.
Groff: This is what I’m working on now, and I’ll never read it over again. It just doesn’t matter.
Wood: Wow. That’s amazing. I’m fascinated to ask the other two, Nick and Sam, do you work like that?
White: I have yellow legal notepads that I write on when I’m trying to just figure out the bones of the story, and then I’ll go to the computer. And I’ll type up what I imagine is going to be the first chapter, and then I’ll print that out and set it aside and look at it again. And then I’ll make corrections on that. And then I’ll open up another Word document, then I’ll go back to the yellow legal pad. So I do a mix of that, but I also find that when I’m working on a project, I carry it around with me everywhere I go. For me, writing is the stuff I do at my desk, yes, but it’s also the stuff I’m thinking about when I’m walking. My partner likes to say that I live in the clouds. And I think that is something that I struggle with: being present a lot of the time. Because I am thinking about the story and the ways in which I’ve written myself into a corner, or things that I don’t quite yet understand about a character or a situation. And it feels like there are these little knots that I’m slowly untying. I think for me, time is a great teacher. When I’m drafting in the heat of the draft, I can convince myself what I’ve written is brilliant. And then I go back the next day, and I look at it with perhaps more sober eyes, and I’m like, “Oh, this is not actually brilliant.”
Marks: I think that “shitty first draft” thing is accurate. It’s a weird, altered state where I’m writing and when I go back, I’m like, “I wrote that, oh, that’s cool.” Like, literally, I’ve forgotten what I’ve written. I do that a couple of drafts. And then once you find what it is, then more of the craft comes in, then I have to be a different kind of writer for myself: What am I trying to do in this scene? What is it really about? And then molding it. But that first phase of just going is the hardest and most exciting thing.
White: James, you mentioned beginnings and trying to find the first sentence, and I was thinking a lot about that. I often think that revision for me in the drafting process is a way of pulling it out of myself when I’m the only reader, but then starting to think about the other readers this will have, and starting to think about it like a reader. And one of the things that a writing teacher once told me about beginnings that I think is so important is that when you begin a story, you also, through discourse, begin to teach the reader how to read the story. And I feel like the process of revision is like thinking about that reader constantly and who that reader is.
Wood: I love that. I was just so interested to hear from all of you because most of my writing is professionalized reviewing, this is like 4,000 words, 5,000 words at the most, and so it’s completely different. I tend to begin at the beginning and end at the end. And that means I obsessively have to get the beginning right. The first couple of paragraphs have to be the right ones. Once I’ve got that, then everything can flow, and it flows fairly quickly. But that first bit can be extremely slow.
Marks: Can I ask a question about your process, James? Let’s say you’re reading a novel. What’s the process of reading and synthesizing your experience to writing what you thought about it?
Wood: So the first reading I, like most of us, I’m sure, I read with a pen in hand and I’m putting lines under things and dog-earing pages. But I try to just suspend judgment as much as possible and let the experience of the book have its way. And then when I go back to the things that I thought were interesting… “Why did I dog-ear that? Oh, there’s that passage. I really liked that image, that word.” And then when I’m doing that, I’m assembling some kind of argument, but I think that first reading, I try to be fairly open and innocent.
White: That’s how I tell my students to read each other’s work in workshops. Read it at least twice; the first read is what I call the “honeymoon read,” where anything goes. This is just like a fun read; let the piece just have its way with you.
Laine Perfas: So what’s after the honeymoon read?
White: Well, then it’s the “seven-year itch.” That’s when you get the pin out and that thing that you first adored, you’re like, “Dear God, does he have to use a conjunctive adverb every time he makes a contrasting point?”
“The first read is what I call the ‘honeymoon read,’ where anything goes. Then it’s the ‘seven-year itch.’”
Nick White
Laine Perfas: James, I was just thinking about what makes a good story, and is it possible to be objectively good, or does the reader just bring so much of their own selves to a text? I have picked up titles that were raved about by critics or friends. Then I get two chapters in and I’m like, “Ugh, it’s just not me.” So as a professional critic, I’m wondering your take on that question.
Wood: I would say, and I bet I’m joined here by everyone else, that I’m responding to and looking for a kind of vitality, a sort of liveliness and life on the page. And in that sense, once you’ve got that, other questions like, Is this a realist novel? Is this a postmodern experimental novel? Is it broken into numbered paragraphs? All that stuff, that’s not to the point. Is it? The thing that we’re trying to feel and find is just something very alive that draws you in. I have a very strong memory of back in about 2011, standing in the kitchen at the kitchen counter and opening a package from Archipelago Books. And they had sent me just on spec, the first volume of Karl Ove Knausgård’s “My Struggle.” And I didn’t know anything about it. I’d never heard of him. I knew nothing about it, and there was just a little note saying, “Seems like you might be interested in this.” So I opened it and I just started reading it and I was standing there for three or four pages and I thought, “This has something. This has it, it’s drawing me in. It has real vitality.” So above all, I’m just looking for that thing, the non-deadness.
White: When I pick up a book to read it, I am always in the mindset where I am reading with my arms uncrossed, like, I want to be impressed. I want to be taken somewhere. I want to be drawn in. I never underestimate a book or a story’s power to make me forget everything else.
Laine Perfas: Thinking about that vitality or that life, I feel like that might be a sign that a story is working, either for you or just in general. As you are writing your own stories, how do you know whether or not it’s working?
Groff: If it comes close, you can feel it. It’s almost like a vibrational thing, you’re writing into it and it feels good, right? Even if it feels bad to write it, it feels good because you’re coming as close as possible to this thing that you want so badly to do. And that doesn’t mean that everyone who’s going to read it’s going to like it or going to feel the same vibration, but you feel it and you sense it and you know that it’s there. And it takes a very long time as a writer to actually develop a sense of that in your own work.
Marks: I just really appreciate, Lauren, how much you talk about how things feel and the vibrations and the physical nature of it. You’re talking about success at Harvard, and I think that a thing at Harvard — you’re right, most of the students are very successful. And most of the success has come at an intellectual level, and they plan their plays or their things in some ways. But, I think just returning to feeling or vibrations I just find it really useful, and I think I might use it in my class, that’s all. I’ll credit you.
Groff: Good! Sometimes we do have to detach our intellect from the thing that we’re trying to do, because it’s not all about intellect. Eventually in the editing process, that’s when we apply it. If you want to write something that’s going to affect people emotionally, you have to do it emotionally.
White: And it has to cost you more than the time you’re spending writing. It pushes me to my emotional and intellectual capabilities. When something is working, it is because all cylinders are firing and I am working at the very bleeding edge of what I am capable of.
Wood: I’d also say, as a tip — I don’t know if it’s useful for any of our listeners — first of all, I tend to just quietly read under my breath as I’m reading through a paragraph or two. I quietly read it so that I can get a sense, but I also think actually just a more vocal version of that, of actually reading something out loud. It’s interesting, isn’t it, when you read from your work in a bookshop. I don’t know if others have had this. I’ve certainly had it. You agree to read a few pages, and suddenly you find you’ve edited out a sentence. It’s like some weird kind of bullshit detector went off and you thought, “Ah, I can’t pull that one off on the room.” And there’s probably a good reason for that, right? And it should have been left out in the original book. So I think that’s the estranging thing always — you want estrangement, don’t you? Because you’re trying to split yourself into two. This other person, this colder person who is ready to murder the darlings, will ideally read what you’ve written, and that’s very difficult to do.
White: Reading aloud is so important. I work in my office here at Lamont, and I’m sure that when I was back at OSU as a professor there, my colleagues who were nearby probably thought I was insane, because I’m reading my stuff aloud and it must appear that I am in some ways having a breakdown, especially depending on what part of the story or chapter I’m reading. But that is so true.
Laine Perfas: James, you mentioned the phrase “murder your darlings.” Just in case there’s people listening who are not familiar with it, because it sounds pretty morbid, it’s this idea that you may have these beloved passages or sentences or characters even, but you have to kill them for the greater good of the story. Does it get easier to murder your darlings?
Wood: I think it does, I certainly have memories of being much younger and reacting badly to editors, possessively and somewhat neurotically storming around the house for a couple of days and saying, “This fool wants to…” And then I think just having enough of those experiences, you begin to realize that there is wisdom outside oneself.
White: Editors have saved my ass so many times. I remember in a story from my story collection, my editor, Kate Napolitano, she was so brilliant. She had a great bullshit detector. I remember there was one line in one of my stories where I said, “He wept,” and she put a little note there that said, “Who is he? Jesus?” And I just cut it out. And I was like, oh, I don’t know. I think I like this, but she’s saying, cut it out. And I cut it out and then I go back and revisit the story. It just works so much better. Like everything she’d ever told me to cut out, I have done and followed. And it made it so much better. I totally agree that murdering all of that gets so much easier; you become a sociopath, almost, toward your bad lines.
Marks: Can I have a little counter to this? A gentle pushback. Obviously. Yes. Kill your darlings. Yes. Of course. Edit. And it does get easier. I think that’s true. That said, sometimes things are difficult, and people don’t like them, but it doesn’t mean you should cut them. I’m not saying you need to be willfully difficult, but that you do things even though they may not be immediately appealing. It goes back to the TV conversation. So much TV is bad because it’s always rewarding, and there’s a place for that. But there’s also a place for things that don’t reward immediately. I’m just saying that in the killing of your darlings discussion, you have to balance, do I need this? Is this bad? Who is this telling? And that’s why it makes it hard, right? Maybe I haven’t done a good enough job of writing the thing I want to write. It’s not that I should cut it, it’s that I should write it better. That’s the challenge.
Groff: That’s what good editing is, flagging those moments where there’s no life or there’s less life than you need. It’s not telling you to cut something. It’s telling you it needs to be better. Everyone needs an editor.
White: One of my greatest writing teachers, Michele Herman, she had this famous mark that she would put on your stories. They were brackets. She would bracket a sentence and she would just write out on the side, “Do better here.” And it really worked.
Laine Perfas: I am having a flashback to my early writing career and how a comment like that would’ve caused me to have a total breakdown, and I would take it so personally. But I do think with experience and perspective, you realize that there’s nothing to take personally; we all benefit from having other people weigh in on stuff.
White: We all make bad art on the way to making good art, and it’s just the way it is.
“We all make bad art on the way to making good art.”
Nick White
Laine Perfas: As we wrap up this episode, do you have any advice for other writers out there?
White: One of the things I’ve found has been very useful in talking to my students, especially, when we’re working on short stories, but more than ever when we’re working on novels, is thinking about your process and thinking about your schedule. I’m teaching a novel workshop next semester, and one of the first things that we’re going to do is talk about our schedule, making time for the practice of writing in our daily lives. And I think that has always been something that has saved me even when I’m feeling stuck, I know that every Monday from 7 in the morning to 10, I’m going to spend three hours that Monday thinking about my work, even if I don’t put a mark on the page, I’m still thinking about it. I’m still wrestling with it. I think Peter Ho Davies, who teaches at University of Michigan, talks about how you don’t have to write every day, but you still need to touch your work every day. And I think that is something that has always helped me. When I feel like I’m failing and I don’t know my way out of a particular story, my instinct, going back to what Sam said about how we deal with conflict is to run away, is to procrastinate, is to watch a really bad television show or do something that’s not writing. But that’s the time when I need to go to the work and sit with it.
Groff: Writing is more a verb than it is a noun. It’s the process. What you’re doing is focused on the process. That’s the art. And if you happen to have something at the end of it that other people can read, that’s great. That allows you to do it again.
I don’t know if anybody ever gave this advice, but it feels like someone did, so I’m just going to go with it. There really are no rules, right? If you look at the great works of literature, Leo Tolstoy does everything that he wants to do because he can, right? So it’s really only the rules inherent to the story at hand that you need to discover, and then everything else you just figure out as you’re going.
So do not be hide-bound. Do not be rule-bound. Do not be afraid. Go courageously into the work and the work will reward you.
“Do not be hide-bound. Do not be rule-bound. Do not be afraid. Go courageously into the work and the work will reward you.”
Lauren Groff
Wood: Lovely.
White: I love that.
Laine Perfas: Thank you all for joining me today.
Groff: Thanks, Sam.
White: Thank you.
Marks: Thanks, Sam.
Laine Perfas: Thanks for listening. To see a transcript of this episode or to listen to our other episodes, visit harvard.edu/thinking. If you’re a fan of this podcast and want to support our work, share it with a friend or colleague. This episode was hosted and produced by me, Samantha Laine Perfas, with additional production and editing support from Sarah Lamodi, and editing by Ryan Mulcahy, Paul Makishima, and Max Larkin. Original music and sound design by Noel Flatt. Produced by Harvard University, copyright 2025.
If you think of a single atom as a grain of sand, then a wavelength of visible light — which is a thousand times larger than the atom’s width — is comparable to an ocean wave. The light wave can dwarf an atom, missing it entirely as it passes by. This gulf in size has long made it impossible for scientists to see and resolve individual atoms using optical microscopes alone.
Only recently have scientists found ways to break this “diffraction limit,” to see features that are smaller than the wavelength of light. With new techniques known as super-resolution microscopy, scientists can see down to the scale of a single molecule.
And yet, individual atoms have still been too small for optical microscopes — which are much simpler and less expensive than super-resolution techniques — to distinguish, until now.
In an open-access paper appearing today in Nature Communications, MIT scientists present a new computational method that enables optical microscopes to resolve individual atoms and zero in on their exact locations in a crystal structure.
The team’s new “discrete grid imaging technique,” or DIGIT, is a computational imaging approach that scientists can apply to optical data to calculate the most probable location of individual atoms based on a very important clue: the material’s known atomic configuration. As long as scientists have an idea of what a material’s physical atomic layout should be, they can use this layout as a sort of map to determine where specific atoms or features must be located.
“It’s like you know there’s a seating chart,” says lead author Yuqin “Sophia” Duan, a graduate student in MIT’s Department of Electrical Engineering and Computer Science (EECS). “Previous methods could tell you what section an atom is in. But now we can take this seating chart as prior knowledge, and can pinpoint exactly which seat the atom is in.”
With DIGIT, the team can now pinpoint individual atoms with a resolution of 0.178 angstroms. (One angstrom is one-tenth of a nanometer, which is less than half the width of a single atom.) The technique enables optical microscopes to localize atomic-scale features in any material that has a known atomic pattern, such as crystalline materials or certain proteins with repeating molecular chains.
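To make the “seating chart” intuition concrete, here is a minimal Python sketch. It is a hypothetical illustration, not the team’s released DIGIT code: the cubic-grid geometry, spacing, and noise level are stand-in values. The idea is simply that if an emitter is known to sit on a regular grid, many blurry optical measurements can be averaged and the result snapped to the nearest allowed lattice site.

import numpy as np

# Hypothetical illustration only (not the released DIGIT code): an emitter is
# assumed to sit on a simple cubic grid, so a noisy optical position estimate
# can be snapped to the nearest allowed lattice site.
SPACING = 3.57  # angstroms; roughly the diamond lattice constant, used as a stand-in

def snap_to_lattice(position, spacing=SPACING):
    """Return the nearest site of an ideal cubic grid to a noisy (x, y) estimate."""
    return np.round(np.asarray(position) / spacing) * spacing

rng = np.random.default_rng(0)
true_site = np.array([2.0, 3.0]) * SPACING                 # the atom's actual "seat"
blurry = true_site + rng.normal(scale=5.0, size=(500, 2))  # ~5-angstrom optical blur

mean_estimate = blurry.mean(axis=0)          # averaging many shots narrows the blur
print("raw mean estimate :", mean_estimate)
print("snapped to lattice:", snap_to_lattice(mean_estimate))  # recovers (7.14, 10.71)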
The team says the method could help guide the design of quantum devices, which often require placing individual atoms precisely within a crystal. Beyond quantum technologies, DIGIT can also provide new insights into how defects and impurities shape the behavior of advanced materials — from semiconductors to superconductors.
Duan’s co-authors at MIT are Qiushi Gu, Hanfeng Wang, Yong Hu, Kevin Chen, Matthew Trusheim, and EECS Professor Dirk Englund.
Grid support
Scientists can image features smaller than a nanometer, and sometimes as small as a single atom, but not with optical microscopes. In these cases, they use transmission or scanning electron microscopes, which send high-energy beams of electrons into a sample to generate an image based on the pattern in which the electrons scatter. These electron-based methods produce highly detailed, near-atomic-scale images, but they require imaging in a vacuum and at high energies, and only work in ultrathin, synthetic, or solid-state materials. Electron-based imaging methods are too harsh for more delicate living specimens.
In contrast, optical microscopes work at lower energies, in ambient conditions, and are safe to apply to biological samples. But they cannot discern features past the diffraction limit. Essentially, a microscope is unable to see features that are smaller than half the wavelength of the visible light (about 200 to 300 nanometers) that it sends in to probe a sample. Atoms, then, have long eluded optical microscopes.
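For a rough sense of scale, here is a back-of-the-envelope calculation of the Abbe limit, d = wavelength / (2 x numerical aperture), using generic textbook values rather than figures from the study; it lands in the couple-hundred-nanometer range, roughly a thousand times wider than an atom.

# Back-of-the-envelope Abbe diffraction limit: d = wavelength / (2 * numerical aperture).
# Generic textbook values, not parameters from the MIT study.
wavelength_nm = 550         # green visible light
numerical_aperture = 1.4    # a high-quality oil-immersion objective
d_nm = wavelength_nm / (2 * numerical_aperture)

atom_width_nm = 0.2         # rough diameter of a single atom (about 2 angstroms)
print(f"smallest resolvable feature: ~{d_nm:.0f} nm")                      # about 196 nm
print(f"compared with a single atom: ~{d_nm / atom_width_nm:.0f}x wider")  # roughly 1,000x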
In 2014, however, the Nobel Prize in Chemistry was awarded to developers of a technique to overcome the diffraction limit. Super-resolution microscopy works by shining laser light on a sample at a specific frequency that is known to resonate with a feature of interest, such as a certain molecule. When that molecule resonates, it effectively announces its presence in the material. With this optical manipulation, scientists can visualize features as small as 10 nanometers, on the scale of a single molecule.
Duan and Englund looked to resolve even smaller features by combining super-resolution techniques with statistical analysis and knowledge of materials that has often been overlooked.
“One thing that gets ignored in imaging optical systems is the physical configuration of your system,” Duan says. “For example, if you want to visualize defects in a diamond system, these defects can only be at certain positions, since they have to follow the grid of the atomic diamond structure. In proteins, there are some structures that grow in an organized grid, and their location must be somewhere along that physical grid.”
The researchers suspected that if they had a reasonably accurate map of a material’s atomic structure (imagine the ball-and-stick models of molecules in a chemistry classroom), they might use such maps as a template and try out many different orientations and rotation angles to find the closest match to whatever features are initially visualized using super-resolution microscopy.
“No one has ever done this before, to include the physical constraints or system information into the resolution technique,” Duan says.
Blurriness, collapsed
To test their idea, the researchers worked with a sample of diamond — a crystal whose microstructure is well-understood and resembles an organized grid, or lattice, of repeating carbon atoms. The researchers blindly knocked out some carbon atoms in the lattice and replaced them with silicon atoms using facilities at MIT.nano. Their goal was to identify and determine the precise locations of the errant silicon atoms.
To do so, they first used established techniques of super-resolution microscopy to probe the diamond sample, using lasers set to specific wavelengths at frequencies known to resonate with the silicon atoms but not the carbon atoms. With this technique, researchers produced images that depicted the silicon atoms, but only as a uniform blur.
The team then applied DIGIT to further resolve the picture. Knowing that diamond in general has a grid-like configuration of carbon atoms, the researchers took this configuration as a map, or seating chart of sorts, and assumed that any silicon atoms that took the place of a carbon atom must sit within the grid, which has a known spacing between atoms.
“Because the silicon atoms are substituting carbon atoms in the lattice, that means they must obey some integer multiple of the atomic spacing of the crystal lattice, separating any two silicon atoms,” Englund says. “That prior knowledge makes the localization different than if you had a purely amorphous material.”
The researchers essentially simulated many possibilities of orientations and rotation angles of the diamond lattice, superimposed on the blurry image of atoms that the super-resolution microscopy technique produced.
“The trick is that, in certain materials, atoms aren’t spread out randomly — they sit on a grid inside a crystal,” Duan explains. “We used that prior knowledge to sharpen the microscope’s picture. Once we factored in that ‘atomic grid,’ the blurriness collapsed, and we could pinpoint exact positions.”
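To make the “seating chart” idea concrete, here is a minimal, hypothetical sketch (not the team’s DIGIT code, which the article says is available on GitHub) of how blurry emitter localizations might be snapped onto a known lattice by brute-force searching over lattice rotations and offsets. The function names, the square-lattice geometry, and the 3.57-angstrom spacing in the example are illustrative assumptions only:

```python
import numpy as np

def nearest_lattice_sites(points, spacing, angle, offset):
    """Map each 2-D point onto the nearest site of a rotated, shifted square lattice."""
    c, s = np.cos(angle), np.sin(angle)
    rot = np.array([[c, -s], [s, c]])
    local = (points - offset) @ rot          # express points in the lattice frame
    snapped = np.round(local / spacing) * spacing
    return snapped @ rot.T + offset          # map the chosen sites back to lab coordinates

def fit_lattice(points, spacing, n_angles=180, n_shifts=12):
    """Brute-force search for the lattice rotation and offset that best explain the points."""
    best_cost, best_angle, best_offset = np.inf, 0.0, np.zeros(2)
    shifts = np.linspace(0.0, spacing, n_shifts, endpoint=False)
    for angle in np.linspace(0.0, np.pi / 2, n_angles, endpoint=False):
        for dx in shifts:
            for dy in shifts:
                offset = np.array([dx, dy])
                cost = np.sum((points - nearest_lattice_sites(points, spacing, angle, offset)) ** 2)
                if cost < best_cost:
                    best_cost, best_angle, best_offset = cost, angle, offset
    return best_angle, best_offset

# Example: noisy super-resolution localisations of emitters that truly sit on a grid
# with a 3.57-angstrom spacing (roughly the diamond lattice constant; illustrative only).
rng = np.random.default_rng(0)
true_sites = np.array([[0.0, 0.0], [3.57, 0.0], [7.14, 3.57],
                       [0.0, 7.14], [10.71, 3.57], [3.57, 10.71]])
blurry = true_sites + rng.normal(scale=0.4, size=true_sites.shape)
angle, offset = fit_lattice(blurry, spacing=3.57)
print(nearest_lattice_sites(blurry, 3.57, angle, offset))  # localisations snapped to lattice sites
```

A real pipeline would likely also weight each localization by its measurement uncertainty, but the core move illustrated here, treating the lattice as prior knowledge that collapses the blur onto discrete allowed sites, matches the article’s description.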
In the end, they found the technique could pinpoint the location of individual silicon atoms within the diamond lattice, with a precision of 0.178 angstroms — the sharpest resolution of any optical-based imaging technique. The team has made the DIGIT code available on GitHub for anyone to apply to their optical measurements, provided their sample of interest has a well-understood atomic structure. They hope that scientists will then start to see much finer and more detailed features and processes using light.
“It’s a big step — it takes optical microscopes into the realm of atomic scale, something people thought only electron microscopes or X-rays could do,” Duan says. “That opens up a whole new way of studying materials and biology.”
MIT researchers have developed the discrete grid imaging technique (DIGIT), an optical super-resolution technique that maps quantum emitters to lattice sites with atomic localization precision (as represented in this artist’s interpretation).
New York City’s mostly indoor cats easily caught SARS-CoV-2 during the first wave of the COVID-19 epidemic – and most were asymptomatic and were likely infected by their owners.
The CSA delivers independent and impartial science and engineering advice to UK ministers and policymakers across the DESNZ policy and delivery portfolio and Clean Energy Superpower Mission. The CSA is also responsible for ensuring the department has robust systems in place to access science and engineering expertise, including as departmental Head of the Government Science and Engineering Profession.
“It's a great honour to join the Department for Energy Security and Net Zero as Chief Scientific Adviser at a time when scientific evidence is so crucial to informing the UK’s response to the twin challenges of climate change and energy security,” Emily said.
Emily will make a phased transition from her role as Director of Cambridge Zero, the University of Cambridge's major climate change initiative, while retaining her academic research activities. She will begin her DESNZ role by working one day a week from 3 November before moving to three days from January and four days from Summer 2026. The secondment will be for three years.
“I warmly congratulate Professor Shuckburgh on her appointment as Chief Scientific Adviser to the Department for Energy Security and Net Zero. It is an appointment that recognises not only her outstanding academic leadership in tackling the climate and biodiversity crises, but also the vital role that Cambridge plays in shaping Britain’s future,” University of Cambridge Vice-Chancellor Deborah Prentice said.
Emily was awarded a CBE in HM King Charles III’s 2025 Birthday Honours for the Public Communication of Climate Science, and was appointed alongside two other Cambridge academics to the DESNZ Science and Technology Advisory Council (STAC) in July. She has acted as an adviser on climate to the UK Government in various capacities, including as a Friend of COP26. Before founding Cambridge Zero in 2019, Emily worked for more than a decade at the British Antarctic Survey, where her work included leading a UK national research programme on the Southern Ocean and its role in climate.
“The role of the CSA is so critical to our work and our Mission, therefore I’m delighted that Professor Shuckburgh is joining the Department. She brings incredible experience as a world leading climate scientist, and I know she’ll add considerable value to the work of our Department,” DESNZ Permanent Secretary Jeremy Pocklington said.
Emily was awarded an OBE in 2016 and is co-author with HM King Charles III and British environmentalist Tony Juniper of A Ladybird Book on Climate Change. She is a mathematician and climate scientist, a Fellow of Darwin College and an alumna of Trinity College, Cambridge. She is President-elect of the Royal Meteorological Society, a Fellow of the Cambridge Institute for Sustainability Leadership (CISL), a Fellow of the British Antarctic Survey, a Fellow of the Royal Geographical Society, and an Honorary Fellow of the Energy Institute.
At Cambridge, Emily is Professor of Environmental Data Science at the Department of Computer Science and Technology (CST), Academic Director of the Institute of Computing for Climate Science (ICCS), and co-Director of the Centre for Landscape Regeneration (CLR) and also of the UKRI Centre for Doctoral Training on the Application of AI to the study of Environmental Risks (AI4ER).
The UK Department for Energy Security and Net Zero (DESNZ) has appointed Cambridge Zero Director Professor Emily Shuckburgh CBE as the Department’s new Chief Scientific Adviser (CSA).
By Mr Nicholas Thomas, Research Fellow at the Institute of Policy Studies, Lee Kuan Yew School of Public Policy at NUS. The Straits Times, 13 October 2025, Opinion, pB2
Do you have a memory so vivid you can relive it as if it's happening all over again, re-experiencing the physical sensations and emotions just as you did in that moment?
Researchers at the Universities of Cambridge and Durham want to understand more about vivid memories: how these experiences differ from person to person, how they evolve as we age, and how they have changed across modern history. To do it, they need your help.
The team has launched an online public survey asking people to describe two of their most vivid memories. They’re hoping for thousands of responses from people of all age-groups and walks of life, to help them build an anonymised database representative of the whole population.
The findings will inform new ways to help people remember things in more vivid detail. They will also help researchers to understand the nature of human memories across the lifespan, and how ideas about memory have evolved over centuries.
While the modern scientific definition of vivid memory tends to emphasise visual detail, the team is taking a novel approach by drawing on Shakespeare’s texts and historical diaries for a richer definition, encompassing many additional sensations.
Dr Kasia Mojescik, a researcher in the Department of Psychology at the University of Cambridge who is involved in the project, said: “For the first time, cognitive neuroscientists are working directly with humanities scholars to design experiments that try to understand vivid memories from an entirely new perspective.”
Professor Charles Fernyhough in the Department of Psychology at Durham University, and a member of the project team, said: “By exploring historical and literary perspectives on memory, we’re including many aspects of the experience of remembering - such as strong emotions, and the feeling of being present in the moment - that have been neglected in purely scientific studies.”
Using machine learning tools, the team will look for recurring patterns in the experiences that are remembered with the greatest detail throughout our lives.
Trends that emerge across age groups might explain why, even as we feel our memories are becoming less precise as we age, our most precious or identity-shaping memories often remain as vivid as if they happened yesterday.
Dr Martha McGill in the Faculty of English at the University of Cambridge, a member of the project team, will reflect on how the experience of remembering has changed over time, looking at British autobiographical writings from the sixteenth to the eighteenth centuries.
Professor Jon Simons in the Department of Psychology at the University of Cambridge, and project lead, said: “Many people have at least one really vivid memory. For me it’s the birth of my first child. It’s not something that I just know happened – it’s an event I can go back and relive in incredible detail, like mental time travel.”
The team hopes that the findings might also inform future pharmaceutical treatments and therapeutic interventions for memory problems.
More information on the research project, When Memories Come Alive, is available here.
This research is funded by UK Research and Innovation (UKRI)’s pilot scheme for interdisciplinary research: the Cross Research Council Responsive Mode scheme (CRCRM).
Researchers have launched a public survey to help them unlock the secrets of vivid memory, and find ways to help us better recall past experiences
For the first time, cognitive neuroscientists are working directly with humanities scholars to design experiments that try to understand vivid memories from an entirely new perspective.
Diabetes leads to nerve damage in half of all people affected, starting in the feet. The smart sock from ETH spin-off MYNERVA helps sufferers feel the ground again when walking and alleviates their chronic pain.
The degree to which someone trusts the information depicted in a chart can depend on their assumptions about who made the data visualization, according to a pair of studies by MIT researchers.
For instance, if someone infers that a graph about a controversial topic like gun violence was produced by an organization they feel is in opposition to their beliefs or political views, they may discredit the information or dismiss the visualization altogether.
The researchers found that even the clearest visualizations often communicate more than the data they explicitly depict, and can elicit strong judgments from viewers about the social contexts, identities, and characteristics of those who made the chart.
Readers make these assessments about the social context of a visualization primarily from its design features, like the color palette or the way information is arranged, rather than the underlying data. Often, these inferences are unintended by the designers.
Qualitative and quantitative studies revealed that these social inferences aren’t restricted to certain subgroups, nor are they caused by limited data literacy.
The researchers consolidate their findings into a framework that scientists and communicators can use to think critically about how design choices might affect these social assumptions. Ultimately, they hope this work leads to better strategies for scientific communication.
“If you are scrolling through social media and you see a chart, and you immediately dismiss it as something an influencer has produced just to get attention, that shapes your entire experience with the chart before you even dig into the data. We’ve shown in these papers that visualizations do more than just communicate the data they are depicting — they also communicate other social signals,” says Arvind Satyanarayan, an associate professor in the MIT Department of Electrical Engineering and Computer Science (EECS) and member of the Computer Science and Artificial Intelligence Laboratory (CSAIL) and co-senior author of this research.
He is joined on the paper by co-lead authors Amy Rae Fox, a former CSAIL postdoc, and Michelle Morgenstern, a current postdoc in MIT’s anthropology program; and co-senior author Graham M. Jones, professor of anthropology. Two related papers on this research will be presented at the IEEE Visualization Conference.
Charts as social artifacts
During the height of the Covid-19 pandemic, social media was awash in charts from organizations like the World Health Organization and Centers for Disease Control and Prevention, which were designed to convey information about the spread of disease.
The MIT researchers studied how these visualizations were being used to discuss the pandemic. They found that some citizen scientists were using the underlying data to make visualizations of their own, challenging the findings of mainstream science.
“This was an unexpected discovery as, previously, citizen scientists were typically aligned with mainstream scientists. It took us a few years to figure out how to study this phenomenon more deeply,” Satyanarayan says.
Most research into data visualization studies how charts communicate data. Instead, the researchers wanted to explore visualizations from a social and linguistic perspective to assess the information they convey beyond the data.
Linguistic anthropologists have found that, while language allows people to communicate ideas, it also holds social meaning beyond the words people use. For instance, an accent or dialect can indicate that someone is part of a particular community.
By “pointing” to certain social meanings, identities, and characteristics, language serves what is known as a socio-indexical function.
“We wanted to see if things in the visual language of data communication might point to certain institutions, or the kinds of people in those institutions, that carry a meaning that could be unintended by the makers of the visualization,” Jones says.
To do this, the researchers conducted an initial, qualitative study of users on the social media platform Tumblr. During one-on-one interviews, the researchers showed users a variety of real visualizations from online sources, as well as modified visualizations where they removed the textual information, like titles and axes labels.
Stripping out the textual information was particularly important, since it mimics the way people often interact with online visualizations.
“Our engagement with social media is a few quick seconds. People aren’t taking the time to read the title of a chart or look at the data very carefully,” Satyanarayan says.
The interviews revealed that users made detailed inferences about the people or organizations who created the visualizations based on what they called “vibes”: design elements like colors or the use of certain graphics. These inferences in turn impacted their trust in the data.
For instance, after seeing a chart with the flags of Georgia and Texas and a graph with two lines in red and black, but no text, one user said, “This kind of looks like something a Texas Republican (legislator) would put on Twitter or on their website, or as part of a campaign presentation.”
A quantitative approach
Building on this initial work, the researchers used the same methodology in three quantitative studies involving surveys sent to larger groups of people from a variety of backgrounds.
They found the same phenomenon: People make inferences about the social context of a visualization based on its design, which can lead to misunderstandings about, and mistrust in, the data it depicts.
For instance, users felt some visualizations were so neatly arranged they believed them to be advertisements, and therefore not trustworthy. In another example, one user dismissed a chart by a Pulitzer Prize-winning designer because they felt the hand-drawn graphical style indicated it was made by “some female Instagram influencer who is just trying to look for attention.”
“If that is the first reaction someone has to a chart, it is going to massively impact the degree to which they trust it,” Satyanarayan says.
Moreover, when the researchers reintroduced text in the visualizations from which it had been removed, users still made these social inferences.
Typically, in data visualization, the solution to such a problem would be to create clearer charts or educate people about data literacy. But this research points to a completely different kind of data literacy, Jones says.
“It is not erroneous for people to be drawing these inferences. It requires a lot of cultural knowledge about where visualizations come from, how they are made, and how they circulate. Drawing these inferences is a feature, not a bug, of the way we use signs,” he says.
From these results, they created a classification framework to organize the social inferences users made and the design elements that contributed to them. They hope the typology serves as a tool designers can use to develop more effective visualizations, as well as a starting point for additional studies.
Moving forward, the researchers want to continue exploring the role of data visualizations as social artifacts, perhaps by drilling down on each design feature they identified in the typology. They also want to expand the scope of their study to include visualizations in research papers and scientific journals.
“Part of the value of this work is a methodological contribution to render a set of phenomena amenable to experimental study. But this work is also important because it showcases an interdisciplinary cross-pollination that is powerful and unique to MIT,” Jones says.
This work was supported, in part, by MIT METEOR and PFPFEE fellowships, an Amar G. Bose Fellowship, an Alfred P. Sloan Fellowship, and the National Science Foundation.
“We’ve shown in these papers that visualizations do more than just communicate the data they are depicting — they also communicate other social signals,” says Arvind Satyanarayan.
Professor Rajasekhar Balasubramanian and Assistant Professor He Xiaogang from the Department of Civil and Environmental Engineering under the College of Design and Engineering at NUS have earned recognition in the American Geophysical Union (AGU) 2025 Honours Programme for their distinguished achievements.
AGU honours individuals and teams for their outstanding contributions to research, education, science communication and outreach. These recipients have transformed our understanding of the world, improved daily lives and communities, and helped drive solutions towards a more sustainable future.
AGU Fellow
Prof Balasubramanian has been elected a Fellow of AGU, one of the Union’s most prestigious honours, in recognition of his distinguished and sustained contributions to the field of air quality.
The Fellows programme recognises individuals who have demonstrated scientific excellence in the Earth and space sciences through notable achievements in research, such as making a breakthrough or discovery, driving innovation in their field, or achieving sustained scientific impact.
Natural Hazards Early Career Award
Asst Prof He has been conferred the Natural Hazards Early Career Award. His research addresses one of humanity’s most pressing challenges in the face of climate change: studying and managing the increasing risks from droughts and floods, and their cascading impacts on global water, food, and energy security.
This award is presented annually to recognise significant contributions to natural hazards science by an early career scientist within 10 years of earning their PhD.
By Dr Chew Han Ei, Senior Research Fellow and Head of Governance and Economy, from the Institute of Policy Studies, Lee Kuan Yew School of Public Policy at NUS; and Assoc Prof Vincent Chua from the Dept of Sociology and Anthropology, Faculty of Arts and Social Sciences at NUS. The Business Times, 14 October 2025, p23
By Assoc Prof Liu Nan from the Centre for Quantitative Medicine and Director of the Duke-NUS AI + Medical Sciences Initiative at Duke-NUS Medical School; and Ms Jasmine Ong from SingHealth. The Straits Times, 14 October 2025, Opinion, pB3
The team, led by researchers at the University of Cambridge, analysed more than 250 young galaxies that existed when the universe was between 800 million and 1.5 billion years old. By studying the movement of gas within these galaxies, the researchers discovered that most were turbulent, ‘clumpy’ systems that had not yet settled into smooth rotating disks like our own Milky Way.
Their findings, published in the Monthly Notices of the Royal Astronomical Society, suggest that galaxies gradually became calmer and more ordered as the universe evolved. But in the early universe, star formation and gravitational instabilities stirred up so much turbulence that many galaxies struggled to settle.
“We don’t just see a few spectacular outliers – this is the first time we’ve been able to look at an entire population at once,” said first author Lola Danhaive from Cambridge’s Kavli Institute for Cosmology. “We found huge variation: some galaxies are beginning to settle into ordered rotation, but most are still chaotic, with gas puffed up and moving in all directions.”
The researchers used JWST’s NIRCam instrument in a rarely used ‘grism mode’ that captures faint light from ionised hydrogen gas in distant galaxies. Danhaive wrote new code to unravel the grism data, matching it with images from other JWST surveys to measure how gas was moving inside each galaxy.
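As a rough illustration of the underlying kinematic measurement (a generic sketch under our own assumptions, not Danhaive’s actual pipeline), the wavelength of an ionised-hydrogen emission line in each region of a galaxy can be converted into a line-of-sight velocity, and the spread of those velocities compared with any ordered gradient to judge how “settled” the gas is. The emission line, redshift, wavelengths, and the simple v/sigma diagnostic below are all illustrative:

```python
import numpy as np

C_KM_S = 299_792.458          # speed of light in km/s
LAMBDA_HALPHA = 0.6563        # rest-frame H-alpha wavelength in microns (ionised-hydrogen line)

def line_of_sight_velocity(obs_wavelength_um, systemic_redshift):
    """Peculiar velocity (km/s) implied by a small shift away from the systemic redshift."""
    expected = LAMBDA_HALPHA * (1.0 + systemic_redshift)
    return C_KM_S * (obs_wavelength_um / expected - 1.0)

# Hypothetical measurements across a z = 5 galaxy (one wavelength per spatial region).
z_sys = 5.0
observed = np.array([3.9342, 3.9360, 3.9378, 3.9396, 3.9414])  # microns
velocities = line_of_sight_velocity(observed, z_sys)

v_rot = 0.5 * (velocities.max() - velocities.min())   # crude rotation amplitude
sigma = velocities.std()                              # crude velocity dispersion
print(f"velocities (km/s): {np.round(velocities, 1)}")
print(f"v_rot/sigma ~ {v_rot / sigma:.2f}  (higher values indicate a more ordered disk)")
```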
“Previous results suggested massive, well-ordered disks forming very early on, which didn’t fit our models,” said co-author Dr Sandro Tacchella from the Kavli Institute and the Cavendish Laboratory. “But by looking at hundreds of galaxies with lower stellar masses instead of just one or two, we see the bigger picture, and it’s much more in line with theory. Early galaxies were more turbulent, less stable, and grew up through frequent mergers and bursts of star formation.”
“This work helps bridge the gap between the epoch of reionisation and the so-called cosmic noon, when star formation peaked,” said Danhaive, who is also affiliated with the Cavendish Laboratory. “It shows how the building blocks of galaxies gradually transitioned from chaotic clumps into ordered structures, and how galaxies such as the Milky Way formed.”
The results show how JWST allows scientists to probe galaxy dynamics at a scale that was impossible before. Future studies will aim to combine these findings with observations of cold gas and dust to paint a fuller picture of how the earliest galaxies took shape.
“This is just the beginning,” said Tacchella. “With more data, we’ll be able to track how these turbulent systems grew up and became the graceful spirals we see today.”
The research was supported in part by the Royal Society, the European Union, and the Science and Technology Facilities Council (STFC), part of UK Research and Innovation (UKRI). JWST is an international partnership between NASA, ESA and the Canadian Space Agency (CSA). The data for this result were captured as part of the JWST Advanced Deep Extragalactic Survey (JADES). Sandro Tacchella is a Fellow of St Edmund’s College, Cambridge. Lola Danhaive is a PhD student in the Centre for Doctoral Training (CDT) in Data Intensive Science.
Astronomers using the James Webb Space Telescope (JWST) have captured the most detailed look yet at how galaxies formed just a few hundred million years after the Big Bang – and found they were far more chaotic and messy than those we see today.
The pioneering procedure, which uses ultrasound energy to destroy tumours, took place at Addenbrooke’s Hospital, where Roger Jackson from Bedford underwent the incisionless treatment for liver cancer.
The installation of the Edison Histotripsy System at Addenbrooke’s was made possible by a generous donation to the University of Cambridge from the Li Ka Shing Foundation, a long-standing supporter of cancer research in Cambridge. The technology, developed by US-based HistoSonics, has already treated over 2,000 patients worldwide following the Food and Drug Administration (FDA) clearance for the destruction of liver tumours in 2023.
Histotripsy uses focused sound waves to generate microscopic “bubble clouds” from naturally occurring gases present in targeted tumour tissues. The bubbles form and collapse in microseconds, creating mechanical forces that destroy cancer cells without the need for surgery, radiation, or chemotherapy. With treatment taking as little as 30 minutes and usually with minimal or no pain, patients can recover quickly and spend less time in hospital, with treatment performed as a day case.
Dr Teik Choon See, consultant interventional radiologist at Cambridge University Hospitals NHS Foundation Trust (CUH), led the procedure. He said: “Histotripsy represents a major and exciting step forward in cancer treatment. It allows us to target tumours more precisely while sparing surrounding healthy tissue, offering patients a safer and faster alternative to traditional therapies.
“What is even more promising is in some reported cases, after the sound waves break apart the tumour, the patient’s immune response may become activated and clear up some remaining cancerous tissues, showing real hope for patients.”
“An amazing experience”
Roger Jackson, 80, said: “I feel privileged to be the first NHS patient and to receive this care was an amazing experience. It is impressive to think that sound waves can treat cancer, without the need for patients like me to go through intensive surgery, at what already is a stressful time. I’m hugely grateful to the team at Addenbrooke’s for their specialist care and expertise.”
After treatment last week, Mr Jackson was discharged the following day and is back at home. He said he is now looking forward to spending time with his family, including his sons, grandchildren and great-grandchildren.
Roger Jackson’s treatment is the first histotripsy procedure to take place after the equipment was granted Unmet Clinical Need Authorisation in Great Britain, enabling time-limited, controlled early access to the histotripsy device under the UK’s Innovative Devices Access Pathway pilot programme. Overseen by the Medicines and Healthcare products Regulatory Agency (MHRA), this enables early market access to the medical device under certain conditions prior to full regulatory approval, meaning NHS patients can benefit from the technology years earlier than planned.
With preliminary funding from Addenbrooke’s Charitable Trust (ACT), treatment is initially being offered to selected patients with tumours from primary and secondary liver cancers. The National Institute for Health and Care Research (NIHR) is exploring initiatives to fund research into the clinical and cost-effectiveness of histotripsy. Further studies are underway to explore its use in other cancer types.
“The beginning of a new generation in cancer treatment”
Health and Social Care Secretary Wes Streeting said: “This marks the beginning of a new generation in cancer treatment. We are lighting the fuse beneath the technological revolution, transforming care for NHS patients.
“By slashing red tape, we’ve made sure this game-changing new cancer treatment has reached the NHS front line quicker, and I'm proud to say British patients are now the first in Europe to benefit.
“This government has streamlined approval processes to create an NHS fit for the future - protecting patients while unleashing the full potential of our scientists and NHS staff so they can deliver world-class care.”
Roland Sinker, chief executive of CUH, said: “Histotripsy represents a hugely exciting and new era of cancer innovation and care.
“With faster recovery times and shorter hospital stays, this not only reduces the strain on our hospital beds, but it also frees up surgeons to focus on the more complex cancer cases, helping to cut waiting times.
“We are delighted to be at the forefront of this new ground-breaking technology and understanding how we can treat cancer more accurately and precisely, a position we aim to strengthen further with our planned Cambridge Cancer Research Hospital.”
Cambridge Cancer Research Hospital, set to be built on Europe’s largest life science campus, the Cambridge Biomedical Campus, is a partnership between CUH and the University of Cambridge. By bringing world-leading scientists and clinical expertise together in one NHS building, the new hospital will treat patients across the East of England and will accelerate research and innovations to change the story of cancer across the UK and beyond.
Lawrence Tallon, Chief Executive of the MHRA, said: “This milestone shows how smart, agile regulation can help bring promising new treatments to patients sooner. Through the Innovative Devices Access Pathway, we at the MHRA have worked with partners across the health system to safely make early access to this technology possible.
“My congratulations to the team at Cambridge University Hospitals on this breakthrough – their work demonstrates how collaboration can unlock innovation for patients and deliver faster access to care.”
Access to histotripsy treatment
Addenbrooke’s is currently setting up a referral pathway so the histotripsy technology can be made available to patients at Addenbrooke’s and beyond. External referrals will be considered through a consultant referral, and suitability for the treatment will be decided by medical teams based on the cancer’s location, size and extent, and the patient’s overall fitness.
No other provider is offering histotripsy in the UK at the moment.
Patients should speak to their consultant if they have any questions about being referred for treatment. If you already have a referral, and have further questions, please email the Cambridge team.
Adapted from a press release from CUH
A Cambridge patient has become the first person in Europe to receive cutting-edge histotripsy treatment outside of a clinical trial, after the technology was fast-tracked by the Government - marking a major milestone in NHS cancer care.
Histotripsy represents a major and exciting step forward in cancer treatment
Nervous system functions, from motion to perception to cognition, depend on the active zones of neural circuit connections, or “synapses,” sending out the right amount of their chemical signals at the right times. By tracking how synaptic active zones form and mature in fruit flies, researchers at The Picower Institute for Learning and Memory at MIT have revealed a fundamental model for how neural activity during development builds properly working connections.
Understanding how that happens is important, not only for advancing fundamental knowledge about how nervous systems develop, but also because many disorders such as epilepsy, autism, or intellectual disability can arise from aberrations of synaptic transmission, says senior author Troy Littleton, the Menicon Professor in The Picower Institute and MIT’s Department of Biology. The new findings, funded in part by a 2021 grant from the National Institutes of Health, provide insights into how active zones develop the ability to send neurotransmitters across synapses to their circuit targets. It’s not instant or predestined, the study shows. It can take days to fully mature, and that is regulated by neural activity.
If scientists can fully understand the process, Littleton says, then they can develop molecular strategies to intervene to tweak synaptic transmission when it’s happening too much or too little in disease.
“We’d like to have the levers to push to make synapses stronger or weaker, that’s for sure,” Littleton says. “And so knowing the full range of levers we can tug on to potentially change output would be exciting.”
In the study, the researchers examined neurons that send the neurotransmitter glutamate across synapses to control muscles in fly larvae. To study how the active zones in the animals matured, the scientists needed to keep track of their age. That hadn’t been possible before, but study co-author Akbergenova overcame the barrier by cleverly engineering the fluorescent protein mMaple, which changes its glow from green to red when zapped with 15 seconds of ultraviolet light, into a component of the glutamate receptors on the receiving side of the synapse. Then, whenever she wanted, she could shine light, and all the synapses already formed before that time would glow red, while any new ones that formed subsequently would glow green.
With the ability to track each active zone’s birthday, the authors could then document how active zones developed their ability to increase output over the course of days after birth. The researchers actually watched as synapses were built over many hours by tagging each of eight kinds of proteins that make up an active zone. At first, the active zones couldn’t transmit anything. Then, as some essential early proteins accumulated, they could send out glutamate spontaneously, but not if evoked by electrical stimulation of their host neuron (simulating how that neuron might be signaled naturally in a circuit). Only after several more proteins arrived did active zones possess the mature structure for calcium ions to trigger the fusion of glutamate vesicles to the cell membrane for evoked release across the synapse.
Activity matters
Of course, construction does not go on forever. At some point, the fly larva stops building one synapse and then builds new ones further down the line as the neuronal axon expands to keep up with growing muscles. The researchers wondered whether neural activity had a role in driving that process of finishing up one active zone and moving on to build the next.
To find out, they employed two different interventions to block active zones from being able to release glutamate, thereby preventing synaptic activity. Notably, one of the methods they chose was blocking the action of a protein called Synaptotagmin 1. That’s important because mutations that disrupt the protein in humans are associated with severe intellectual disability and autism. Moreover, the researchers tailored the activity-blocking interventions to just one neuron in each larva because blocking activity in all their neurons would have proved lethal.
In neurons where the researchers blocked activity, they observed two consequences: the neurons stopped building new active zones and instead kept making existing active zones larger and larger. It was as if the neuron could tell the active zone wasn’t releasing glutamate and tried to make it work by giving it more protein material to work with. That effort came at the expense of starting construction on new active zones.
“I think that what it’s trying to do is compensate for the loss of activity,” Littleton says.
Testing indicated that the enlarged active zones the neurons built in hopes of restarting activity were functional (or would have been if the researchers weren’t artificially blocking them). This suggested that the neuron likely sensed the lack of glutamate release through a feedback signal from the muscle side of the synapse. To test that, the scientists knocked out a glutamate receptor component in the muscle, and when they did, they found that the neurons no longer made their active zones larger.
Littleton says the lab is already looking into the new questions the discoveries raise. In particular: What are the molecular pathways that initiate synapse formation in the first place, and what are the signals that tell an active zone it has finished growing? Finding those answers will bring researchers closer to understanding how to intervene when synaptic active zones aren’t developing properly.
In addition to Littleton and Akbergenova, the paper’s other authors are Jessica Matthias and Sofya Makeyeva.
In addition to the National Institutes of Health, The Freedom Together Foundation provided funding for the study.
Researchers studied the importance of neurons’ activity in the development of their circuit connections, called synapses. These images show synapses formed between a neuron (above) and a muscle (below), separated by a narrow cleft. The distinct T-bar shape indicates an active zone. In the upper image, the synapse developed normally. In the bottom image, researchers disrupted the activity of the synapse. As a result, the active zone grew much larger.
When it comes to artificial intelligence, MIT and IBM were there at the beginning: laying foundational work and creating some of the first programs — AI predecessors — and theorizing how machine “intelligence” might come to be.
Today, collaborations like the MIT-IBM Watson AI Lab, which launched eight years ago, are continuing to deliver expertise for the promise of tomorrow’s AI technology. This is critical for industries and the labor force that stand to benefit, particularly in the short term: from $3 trillion to $4 trillion in forecast global economic benefits and 80 percent productivity gains for knowledge workers and creative tasks, to significant incorporation of generative AI into business processes (80 percent) and software applications (70 percent) in the next three years.
While industry has seen a boom in notable models, chiefly in the past year, academia continues to drive the innovation, contributing most of the highly cited research. At the MIT-IBM Watson AI Lab, success takes the form of 54 patent disclosures, more than 128,000 citations with an h-index of 162, and more than 50 industry-driven use cases. Some of the lab’s many achievements include improved stent placement with AI imaging techniques, slashing computational overhead, shrinking models while maintaining performance, and modeling of interatomic potential for silicate chemistry.
“The lab is uniquely positioned to identify the ‘right’ problems to solve, setting us apart from other entities,” says Aude Oliva, lab MIT director and director of strategic industry engagement in the MIT Schwarzman College of Computing. “Further, the experience our students gain from working on these challenges for enterprise AI translates to their competitiveness in the job market and the promotion of a competitive industry.”
“The MIT-IBM Watson AI Lab has had tremendous impact by bringing together a rich set of collaborations between IBM and MIT’s researchers and students,” says Provost Anantha Chandrakasan, who is the lab’s MIT co-chair and the Vannevar Bush Professor of Electrical Engineering and Computer Science. “By supporting cross-cutting research at the intersection of AI and many other disciplines, the lab is advancing foundational work and accelerating the development of transformative solutions for our nation and the world.”
Long-horizon work
As AI continues to garner interest, many organizations struggle to channel the technology into meaningful outcomes. A 2024 Gartner study finds that, “at least 30% of generative AI projects will be abandoned after proof of concept by the end of 2025,” demonstrating ambition and widespread hunger for AI, but a lack of knowledge for how to develop and apply it to create immediate value.
Here, the lab shines, bridging research and deployment. The majority of the lab’s current-year research portfolio is aligned with using and developing new features, capabilities, or products for IBM, the lab’s corporate members, or real-world applications. The last of these span large language models, AI hardware, and foundation models, including multimodal, biomedical, and geospatial ones. Inquiry-driven students and interns are invaluable in this pursuit, offering enthusiasm and new perspectives while accumulating domain knowledge to help derive and engineer advancements in the field, as well as opening up new frontiers for exploration with AI as a tool.
Findings from the AAAI 2025 Presidential panel on the Future of AI Research support the need for contributions from academia-industry collaborations like the lab in the AI arena: “Academics have a role to play in providing independent advice and interpretations of these results [from industry] and their consequences. The private sector focuses more on the short term, and universities and society more on a longer-term perspective.”
Bringing these strengths together, along with the push for open sourcing and open science, can spark innovation that neither could achieve alone. History shows that embracing these principles, and sharing code and making research accessible, has long-term benefits for both the sector and society. In line with IBM and MIT’s missions, the lab contributes technologies, findings, governance, and standards to the public sphere through this collaboration, thereby enhancing transparency, accelerating reproducibility, and ensuring trustworthy advances.
The lab was created to merge MIT’s deep research expertise with IBM’s industrial R&D capacity, aiming for breakthroughs in core AI methods and hardware, as well as new applications in areas like health care, chemistry, finance, cybersecurity, and robust planning and decision-making for business.
Bigger isn't always better
Today, large foundation models are giving way to smaller, more task-specific models yielding better performance. Contributions from lab members like Song Han, associate professor in the MIT Department of Electrical Engineering and Computer Science (EECS), and IBM Research’s Chuang Gan help make this possible, through work such as once-for-all and AWQ. Innovations such as these improve efficiency with better architectures, algorithm shrinking, and activation-aware weight quantization, letting models like language processing run on edge devices at faster speeds and reduced latency.
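For context, here is a generic sketch of the kind of low-bit weight quantization these methods build on (our illustration only, not the lab’s once-for-all or AWQ code; AWQ additionally uses activation statistics from calibration data to decide which weight channels to protect before rounding):

```python
import numpy as np

def quantize_dequantize(weights, bits, group_size=64):
    """Symmetric group-wise quantization along the input dimension; returns the
    dequantized weights so the rounding error can be inspected directly."""
    qmax = 2 ** (bits - 1) - 1
    out_f, in_f = weights.shape
    grouped = weights.reshape(out_f, in_f // group_size, group_size)
    scale = np.abs(grouped).max(axis=2, keepdims=True) / qmax
    q = np.clip(np.round(grouped / scale), -qmax - 1, qmax)
    return (q * scale).reshape(out_f, in_f)

rng = np.random.default_rng(0)
W = rng.normal(scale=0.02, size=(2048, 2048)).astype(np.float32)  # one toy weight matrix

fp16_mb = W.size * 2 / 1e6
for bits in (8, 4):
    err = np.abs(quantize_dequantize(W, bits) - W).mean()
    packed_mb = W.size * bits / 8 / 1e6   # per-group scales add a small overhead not counted here
    print(f"int{bits}: ~{packed_mb:.1f} MB packed (vs {fp16_mb:.1f} MB in fp16), "
          f"mean rounding error {err:.2e}")
```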
Consequently, foundation, vision, multimodal, and large language models have seen benefits, allowing for the lab research groups of Oliva, MIT EECS Associate Professor Yoon Kim, and IBM Research members Rameswar Panda, Yang Zhang, and Rogerio Feris to build on the work. This includes techniques to imbue models with external knowledge and the development of linear attention transformer methods for higher throughput, compared to other state-of-the-art systems.
Understanding and reasoning in vision and multimodal systems have also benefited. Works like “Task2Sim” and “AdaFuse” demonstrate improved vision model performance when pre-training takes place on synthetic data, and show how video action recognition can be boosted by fusing channels from past and current feature maps.
As part of a commitment to leaner AI, the lab teams of Gregory Wornell, the MIT EECS Sumitomo Electric Industries Professor in Engineering, IBM Research’s Chuang Gan, and David Cox, VP for foundational AI at IBM Research and the lab’s IBM director, have shown that model adaptability and data efficiency can go hand in hand. Two approaches, EvoScale and Chain-of-Action-Thought reasoning (COAT), enable language models to make the most of limited data and computation by improving on prior generation attempts through structured iteration, narrowing in on a better response. COAT uses a meta-action framework and reinforcement learning to tackle reasoning-intensive tasks via self-correction, while EvoScale brings a similar philosophy to code generation, evolving high-quality candidate solutions. These techniques help to enable resource-conscious, targeted, real-world deployment.
“The impact of MIT-IBM research on our large language model development efforts cannot be overstated,” says Cox. “We’re seeing that smaller, more specialized models and tools are having an outsized impact, especially when they are combined. Innovations from the MIT-IBM Watson AI Lab help shape these technical directions and influence the strategy we are taking in the market through platforms like watsonx.”
For example, numerous lab projects have contributed features, capabilities, and uses to IBM’s Granite Vision, which provides impressive computer vision designed for document understanding, despite its compact size. This comes at a time when there’s a growing need for extraction, interpretation, and trustworthy summarization of information and data contained in long formats for enterprise purposes.
Other achievements that extend beyond direct research on AI and across disciplines are not only beneficial, but necessary for advancing the technology and lifting up society, concludes the 2025 AAAI panel.
Work from the lab’s Caroline Uhler and Devavrat Shah — both Andrew (1956) and Erna Viterbi Professors in EECS and the Institute for Data, Systems, and Society (IDSS) — along with IBM Research’s Kristjan Greenewald, transcends specializations. They are developing causal discovery methods to uncover how interventions affect outcomes and to identify which ones achieve desired results. The studies include developing a framework that can elucidate how “treatments” may play out for different sub-populations, such as interventions on an ecommerce platform or mobility restrictions on morbidity outcomes. Findings from this body of work could influence fields from marketing and medicine to education and risk management.
“Advances in AI and other areas of computing are influencing how people formulate and tackle challenges in nearly every discipline. At the MIT-IBM Watson AI Lab, researchers recognize this cross-cutting nature of their work and its impact, interrogating problems from multiple viewpoints and bringing real-world problems from industry, in order to develop novel solutions,” says Dan Huttenlocher, MIT lab co-chair, dean of the MIT Schwarzman College of Computing, and the Henry Ellis Warren (1894) Professor of Electrical Engineering and Computer Science.
A significant piece of what makes this research ecosystem thrive is the steady influx of student talent and their contributions through MIT’s Undergraduate Research Opportunities Program (UROP), MIT EECS 6A Program, and the new MIT-IBM Watson AI Lab Internship Program. Altogether, more than 70 young researchers have not only accelerated their technical skill development, but, through guidance and support by the lab’s mentors, gained knowledge in AI domains to become emerging practitioners themselves. This is why the lab continually seeks to identify promising students at all stages in their exploration of AI’s potential.
“In order to unlock the full economic and societal potential of AI, we need to foster ‘useful and efficient intelligence,’” says Sriram Raghavan, IBM Research VP for AI and IBM chair of the lab. “To translate AI promise into progress, it’s crucial that we continue to focus on innovations to develop efficient, optimized, and fit-for-purpose models that can easily be adapted to specific domains and use cases. Academic-industry collaborations, such as the MIT-IBM Watson AI Lab, help drive the breakthroughs that make this possible.”
When it comes to artificial intelligence, MIT and IBM were there at the beginning: laying foundational work and creating some of the first programs — AI predecessors — and theorizing how machine “intelligence” might come to be.
Today, collaborations like the MIT-IBM Watson AI Lab, which launched eight years ago, continue to deliver expertise for the promise of tomorrow’s AI technology. This is critical for the industries and labor force that stand to benefit, particularly in the short term: from $3 trillion to $4 trillion in forecast global economic benefits and 80 percent productivity gains for knowledge workers and creative tasks, to the significant incorporation of generative AI into business processes (80 percent) and software applications (70 percent) over the next three years.
While industry has seen a boom in notable models, chiefly in the past year, academia continues to drive innovation, contributing most of the highly cited research. At the MIT-IBM Watson AI Lab, success takes the form of 54 patent disclosures, more than 128,000 citations with an h-index of 162, and more than 50 industry-driven use cases. Some of the lab’s many achievements include improved stent placement with AI imaging techniques, reduced computational overhead, smaller models that maintain performance, and modeling of interatomic potentials for silicate chemistry.
“The lab is uniquely positioned to identify the ‘right’ problems to solve, setting us apart from other entities,” says Aude Oliva, lab MIT director and director of strategic industry engagement in the MIT Schwarzman College of Computing. “Further, the experience our students gain from working on these challenges for enterprise AI translates to their competitiveness in the job market and the promotion of a competitive industry.”
“The MIT-IBM Watson AI Lab has had tremendous impact by bringing together a rich set of collaborations between IBM and MIT’s researchers and students,” says Provost Anantha Chandrakasan, who is the lab’s MIT co-chair and the Vannevar Bush Professor of Electrical Engineering and Computer Science. “By supporting cross-cutting research at the intersection of AI and many other disciplines, the lab is advancing foundational work and accelerating the development of transformative solutions for our nation and the world.”
Long-horizon work
As AI continues to garner interest, many organizations struggle to channel the technology into meaningful outcomes. A 2024 Gartner study finds that “at least 30% of generative AI projects will be abandoned after proof of concept by the end of 2025,” demonstrating ambition and widespread hunger for AI, but a lack of knowledge about how to develop and apply it to create immediate value.
Here, the lab shines, bridging research and deployment. The majority of the lab’s current-year research portfolio is aligned with using and developing new features, capabilities, or products for IBM, the lab’s corporate members, or real-world applications. These applications include large language models, AI hardware, and foundation models, including multimodal, biomedical, and geospatial ones. Inquiry-driven students and interns are invaluable in this pursuit, offering enthusiasm and new perspectives while accumulating the domain knowledge to help derive and engineer advances in the field, as well as opening up new frontiers for exploration with AI as a tool.
Findings from the AAAI 2025 Presidential panel on the Future of AI Research support the need for contributions from academia-industry collaborations like the lab in the AI arena: “Academics have a role to play in providing independent advice and interpretations of these results [from industry] and their consequences. The private sector focuses more on the short term, and universities and society more on a longer-term perspective.”
Bringing these strengths together, along with the push for open sourcing and open science, can spark innovation that neither could achieve alone. History shows that embracing these principles by sharing code and making research accessible has long-term benefits for both the sector and society. In line with IBM and MIT’s missions, the lab contributes technologies, findings, governance, and standards to the public sphere through this collaboration, thereby enhancing transparency, accelerating reproducibility, and ensuring trustworthy advances.
The lab was created to merge MIT’s deep research expertise with IBM’s industrial R&D capacity, aiming for breakthroughs in core AI methods and hardware, as well as new applications in areas like health care, chemistry, finance, cybersecurity, and robust planning and decision-making for business.
Bigger isn't always better
Today, large foundation models are giving way to smaller, more task-specific models that yield better performance. Contributions from lab members like Song Han, associate professor in the MIT Department of Electrical Engineering and Computer Science (EECS), and IBM Research’s Chuang Gan help make this possible, through work such as once-for-all and AWQ. Innovations such as these improve efficiency with better architectures, model shrinking, and activation-aware weight quantization, letting models for tasks like language processing run on edge devices at faster speeds and with reduced latency.
Consequently, foundation, vision, multimodal, and large language models have seen benefits, allowing the lab research groups of Oliva, MIT EECS Associate Professor Yoon Kim, and IBM Research members Rameswar Panda, Yang Zhang, and Rogerio Feris to build on the work. This includes techniques to imbue models with external knowledge and the development of linear attention transformer methods that deliver higher throughput than other state-of-the-art systems.
Understanding and reasoning in vision and multimodal systems have also seen a boost. Works like “Task2Sim” and “AdaFuse” demonstrate how vision model performance improves when pre-training takes place on synthetic data, and how video action recognition can be boosted by fusing channels from past and current feature maps.
As part of a commitment to leaner AI, the lab teams of Gregory Wornell, the MIT EECS Sumitomo Electric Industries Professor in Engineering, IBM Research’s Chuang Gan, and David Cox, VP for foundational AI at IBM Research and the lab’s IBM director, have shown that model adaptability and data efficiency can go hand in hand. Two approaches, EvoScale and Chain-of-Action-Thought reasoning (COAT), enable language models to make the most of limited data and computation by improving on prior generation attempts through structured iteration, narrowing in on a better response. COAT uses a meta-action framework and reinforcement learning to tackle reasoning-intensive tasks via self-correction, while EvoScale brings a similar philosophy to code generation, evolving high-quality candidate solutions. These techniques help to enable resource-conscious, targeted, real-world deployment.
“The impact of MIT-IBM research on our large language model development efforts cannot be overstated,” says Cox. “We’re seeing that smaller, more specialized models and tools are having an outsized impact, especially when they are combined. Innovations from the MIT-IBM Watson AI Lab help shape these technical directions and influence the strategy we are taking in the market through platforms like watsonx.”
For example, numerous lab projects have contributed features, capabilities, and uses to IBM’s Granite Vision, which delivers impressive computer vision for document understanding despite its compact size. This comes at a time of growing need for the extraction, interpretation, and trustworthy summarization of information and data contained in long-form documents for enterprise purposes.
Achievements that extend beyond direct research on AI and across disciplines are not only beneficial, but necessary for advancing the technology and lifting up society, the 2025 AAAI panel concludes.
Work from the lab’s Caroline Uhler and Devavrat Shah — both Andrew (1956) and Erna Viterbi Professors in EECS and the Institute for Data, Systems, and Society (IDSS) — along with IBM Research’s Kristjan Greenewald, transcends specializations. They are developing causal discovery methods to uncover how interventions affect outcomes and to identify which interventions achieve desired results. The studies include a framework that can elucidate how “treatments” may play out for different sub-populations, whether interventions on an e-commerce platform or mobility restrictions on morbidity outcomes. Findings from this body of work could influence fields from marketing and medicine to education and risk management.
“Advances in AI and other areas of computing are influencing how people formulate and tackle challenges in nearly every discipline. At the MIT-IBM Watson AI Lab, researchers recognize this cross-cutting nature of their work and its impact, interrogating problems from multiple viewpoints and bringing real-world problems from industry, in order to develop novel solutions,” says Dan Huttenlocher, MIT lab co-chair, dean of the MIT Schwarzman College of Computing, and the Henry Ellis Warren (1894) Professor of Electrical Engineering and Computer Science.
A significant piece of what makes this research ecosystem thrive is the steady influx of student talent and their contributions through MIT’s Undergraduate Research Opportunities Program (UROP), MIT EECS 6A Program, and the new MIT-IBM Watson AI Lab Internship Program. Altogether, more than 70 young researchers have not only accelerated their technical skill development, but, through guidance and support by the lab’s mentors, gained knowledge in AI domains to become emerging practitioners themselves. This is why the lab continually seeks to identify promising students at all stages in their exploration of AI’s potential.
“In order to unlock the full economic and societal potential of AI, we need to foster ‘useful and efficient intelligence,’” says Sriram Raghavan, IBM Research VP for AI and IBM chair of the lab. “To translate AI promise into progress, it’s crucial that we continue to focus on innovations to develop efficient, optimized, and fit-for-purpose models that can easily be adapted to specific domains and use cases. Academic-industry collaborations, such as the MIT-IBM Watson AI Lab, help drive the breakthroughs that make this possible.”
More than 900 alumni and guests returned to campus for panel discussions, faculty lectures, networking opportunities, tours and the launch of the Princeton Graduate School’s 125th anniversary celebration.
Recently, more than 1,000 MIT students stepped into the shoes of global decision-makers by trying out En-ROADS, a simulation tool developed to test climate policies, explore solutions, and envision a cleaner and safer environmental future.
MIT is committed to climate action, and this year’s new student orientation showcased that commitment. For the first time ever, incoming Leaders for Global Operations (LGO), Executive MBA, Sloan Fellow MBA, MBA, and undergraduate students all explored the capabilities of En-ROADS.
“The goal is for MIT to become one of the world’s most prolific, collaborative, and interdisciplinary sources of technological, behavioral, and policy solutions for the global climate challenge over the next decade,” MIT Provost Anantha P. Chandrakasan told an audience of about 300 undergraduates from the Class of 2029. “It is something we need to do urgently, and today is your opportunity to play a role in that bold mission.”
Connecting passion with science for change
In group workshop sessions, students collaborated to create a world in which global warming stays well below 2 degrees Celsius above preindustrial levels — the goal of the 2015 Paris Agreement. Backed by the latest science, the En-ROADS simulator let them explore firsthand how policies like carbon pricing and clean energy investments affect our climate, economy, and health. Over 450 incoming MBA students even role-played as delegates at a global climate summit, tasked with negotiating a global agreement to address the harm caused by climate change.
For first-year MBA student Allison Somuk, who played the role of President Xi Jinping of China, the workshop was not only eye-opening about climate, but also altered how she plans to approach her future work and advocacy.
“Before the simulation, I didn’t have data on climate change, so I was surprised to see how close we are to catastrophic temperature increases. What surprised me most was how difficult it was to slow that trajectory. It required significant action and compromise from nearly every sector, not just a few. As someone passionate about improving maternal health care in developing nations, my view of contributing factors has broadened. I now see how maternal health may be affected by a larger system where climate policy decisions directly affect women’s health outcomes.”
MIT Sloan Research Affiliate Andrew Jones, who is also executive director and co-founder of Climate Interactive and co-creator of the En-ROADS tool, presented several sessions during orientation. Looking back on the week, he found the experience personally rewarding.
“Engaging with hundreds of students, I was inspired by the powerful alignment between their passion for climate action and MIT’s increased commitment to delivering on climate goals. This is a pivotal moment for breakthroughs on our campus.”
Other presenters included Jennifer Graham, MIT Sustainability Initiative senior associate director; Jason Jay, MIT Sustainability Initiative director; Krystal Noiseux, MIT Climate Pathways Project associate director; Bethany Patten, MIT Climate Policy Center executive director; and John Sterman, Jay W. Forrester Professor of Management, professor in the MIT Institute for Data, Systems, and Society, and director of the MIT System Dynamics Group.
Chris Rabe, the MIT Climate Project’s Education Program director, was impressed, but not surprised, by how much students learned so quickly as they worked together to solve the problem with En-ROADS.
“By integrating reflection, emotional dynamics, multi-generational perspectives, group work, and inquiry, the En-ROADS simulation provides an ideal foundation for first-year students to explore the breadth of climate and sustainability opportunities at MIT. In the process, students came to recognize the many levers and multi-solving approaches required to address the complex challenges of climate change.”
Inspiring climate leaders
The En-ROADS workshops were a true team effort, made possible with the help of senior staff at MIT Sloan School of Management and the MBA program office, and members of the MIT Sloan Sustainability Initiative, Climate Pathways Project, Climate Policy Center, the Climate Project, Office of the First Year, and entire undergraduate Orientation team.
“Altogether, over a thousand of the newest members of the MIT community have now had a chance to learn for themselves about the climate crisis,” says Sterman, “and what we can do to create a healthier, safer, more prosperous, and more sustainable world — and how they can get involved to bring that world into being, even as first-year undergrads and MBAs.”
By the end of the workshops, the students’ spirits were buoyed. They had all successfully found ways to keep global warming below 2 C. When asked, “What would you love about being part of this new future you’ve created?” a positive, optimistic word cloud came into view. Answers included:
breathing cleaner air;
giving my children a better and safer environment;
my hometown would not be flooded constantly;
rich marine life and protection of reefs;
exciting, new clean industries;
increased socioeconomic equality; and
proof that we as a global community can work together to save ourselves.
First-year MBA student Ruby Eisenbud sums up the sentiment many new MIT students came away with after their workshop.
“Coming to Sloan, one of the questions on my mind was: How can we, as future leaders, make a positive impact related to climate change? While En-ROADS is a simulation, it felt like we experienced, in the smallest way, what it could be like to be a leader navigating the diverging interests of all stakeholders involved in mitigating the impacts of the climate crisis. While the simulation prompted us to face the difficult reality of climate change, it also reinforced my motivation to emphasize climate in my work at Sloan and beyond.”
Gish Jen. Stephanie Mitchell/Harvard Staff Photographer
Arts & Culture
How her life shaped mine
Anna Lamb
Harvard Staff Writer
October 21, 2025
5 min read
Gish Jen’s ties with her mother were important, difficult. She examines why in new novel, ‘Bad Bad Girl.’
Novelist Gish Jen shook her head when it became clear that many critics and readers assumed her 1991 debut, “Typical American,” about a child of Chinese immigrants in the U.S., was essentially autobiography.
More than three decades later, and in the wake of her mother’s death in 2020, Jen says she’s finally tackled her family’s story — although somewhat reimagined.
“All these books are personal, but they’re not personal in the sense that that’s my mother, that’s my father,” said Jen. “They’re much more about my position in the world, and what I see and what I feel about it.”
Her just-released novel, “Bad Bad Girl,” takes as its starting point the early life of a character based on her late mother, Loo Shu-hsin, and follows her through a thorny and complicated journey pursuing higher education in America, meeting Jen’s father, and building a life and family.
(Many characters share the names of those who served as inspirations, including Gish, whose given name was Lillian.)
Jen explores her mother’s upbringing in a wealthy Shanghainese family at a time when being born a girl came with certain expectations and firm limits. She also explores the difficult path her mother faced after leaving home, and the reverberations that impacted her own upbringing.
The mother and daughter have a constant dialogue underpinning the narrative. Jen imagines her mother seeing every word of the novel as it’s written, telling the story of her life. “Bad bad girl!” Jen writes as part of their imagined back and forth. “You should not write anything!”
“I had such a difficult relationship with my mother, and I could actually talk to her about it on the page in a way that I never could in real life,” Jen said. “Even in my imagination, she remains frustrating, in a way that was very true to life. Weirdly she did still seem alive.”
The book is partially set against the political backdrop of the second Sino-Japanese war followed by the Communist revolution. Jen uses letters between America and the extended family back in China to capture snapshots of the worsening strife.
The letters are, in part, inspired by real missives unearthed between her mother and grandmother. In them, Jen said, she found a tense and complicated dynamic not unlike their own.
“Those letters did not surface until after my mother’s death,” she said. “I was just shocked, although it explained a lot about my relationship with my mother when I saw what her relationship with her mother had been.”
In researching the novel, Jen said she traveled to places significant to her mother across Shanghai. Teaching at NYU Shanghai at the time, she talked to locals and got a sense for what the atmosphere was like in the 1930s.
“I found the school that she had attended. I traipsed all over Shanghai. I traipsed all over Huchen, which is the town where my grandfather had come from. I talked to everybody that I could talk to,” she said.
Part of her motivation for writing the book, Jen said, was to preserve this history for her children and grandchildren.
But also, she said, to honor the struggles faced by her mother and other Chinese women of the time. In “Bad Bad Girl,” Jen writes this as her mother is repeatedly being told that because of her ambition and smarts, it’s too bad she was born a girl.
“My mother was a brilliant woman,” she said during an interview. “And at Harvard there are many, many brilliant women. In this day and age, it’s a little hard for them to even grasp how many women there were like them who were not able to do anything like what they will be able to do. And of course, it’s not just true of my own mother — that’s true of a whole generation.”
Jen herself has been successful in her intellectual endeavors. She has published novels, collections of short fiction, and books of nonfiction.
She graduated from Radcliffe College with an A.B. in English in 1977 and went on to earn an M.F.A. in fiction from the Iowa Writers’ Workshop. She was also a 2001-2002 Radcliffe fellow in fiction and poetry. All that, she said, despite her mother — both in real life and in the book — disagreeing with her choice of career.
“And here I am, 10 books,” she said. “So there you go.”
Moreover, Jen said, she thinks she was able to use her writing, as much as it may have displeased her mother, to capture their difficult dynamic — warts and all.
“She would not have liked it because of all the warts,” Jen said. “But I think it’s very sympathetic. And that sympathy is very genuine.”
Science & Tech
What if AI could help students learn, not just do assignments for them?
Professors find promise in ‘tutor bots’ that offer more flexible, individual, interactive attention in addition to live teaching
Anna Lamb
Harvard Staff Writer
October 21, 2025
4 min read
Greg Kestin. Photos by Niles Singer/Harvard Staff Photographer
Educators’ concerns are running high when it comes to AI and how it may undermine teaching and learning. But what if teachers could find ways to use AI to measurably help students learn, rather than simply do their work for them?
Across campus, professors are experimenting with AI “tutor bots” to help students succeed in courses with complicated material when individualized attention is not always feasible. According to Professors Greg Kestin and Kelly Miller, the bots represent one piece of a larger learning strategy that helps students home in on what they do and don’t understand, at their own pace.
“What we have found is that when AI tutors are used with tried-and-true research principles kept in mind, they can improve learning as well as enhance time spent with peers and instructors in the classroom,” he said.
In their study, Kestin and Miller compared a typical classroom setting, in which students initially get lessons from a human instructor, with an AI-supported “flipped classroom,” where students learn core lessons from the tailored bot on their own and bring questions to class.
The bot was built by Kestin using class materials and specific directions on how to frame answers to students.
According to the research, students in the AI-supported classroom reported significantly better engagement with the course and more motivation to learn.
A few factors may be contributing to this outcome, Miller said in the presentation. A main one is the ability to tailor the AI experience to each student.
“Our problem with a traditional classroom [is that] there can be low student engagement and students are typically not getting individualized feedback,” Miller said. “They’re mostly just passively sitting and listening to somebody speak, and so they’re not able to really test their understanding or get feedback on that understanding as they go.”
But with an AI tutor, she added, students can go at their own pace and ask as many questions as they like, any time they like — without fear of being judged.
And that’s the most important aspect of the tutor bot, the researchers said — to make sure it prompts students to think and ask questions, rather than do the thinking for them.
“We have learned that there are some places where it’s not effective to put it in the classroom,” Kestin said. “If you give ChatGPT to a group of students and you say, use it for homework, study, whatever … it turns out the students often do worse on the test because they’re using the AI to think for them. They basically circumvent the critical thinking.”
Instead, Kestin added, AI should help students by giving hints or visual representations of concepts. It can help with data analyses or generate practice problems.
“And then in exams, let AI be like their calculator. Basically, how they would do their work in the real world anyway,” Kestin said.
While their experiment was one of the first to reach Harvard classrooms following the debut of large language models like ChatGPT, many others have since embraced AI help for students.
Over the summer Harvard Information Technology introduced its own tutor bots and AI assistants, HUbot and PingPong. At Harvard Business School, students taking “Financial Reporting and Control” have access to their own custom tutor bot.
Last year, a section of Math 21A was offered with the AI tutor. Instructor Eva Politou is currently gathering data to assess its success in that course.
According to Kestin, more ongoing work in the field of AI tutors involves “investigating the qualities and types of interactions between students and chatbots that prove most valuable to the learning experience.”
That way, he said, the AI tutors can continue to improve and potentially find their way into even more classrooms. Kestin added that follow-up research is in progress focused on understanding the long-term impacts of tutor bots, including retention.
On any given day, MIT’s famed 825-foot Infinite Corridor serves as a busy, buzzing pedestrian highway, offering campus commuters a quick, if congested, route from point A to B. With the possible exception of MIT Henge twice a year, it doesn’t exactly invite lingering.
Thanks to a recent renovation on the first floor of Building 11, the former location of Student Financial Services, there’s now a compelling reason for students to step off the busy thoroughfare and pause for conversation or respite.
Dubbed by one onlooker as “the spaceport,” the area has been transformed into an airy, multi-functional hub. Nestled inside is the Undergraduate Advising Center (UAC), which launched in 2023 to provide holistic support for students’ personal and academic growth by providing individualized advising for all four years, offering guidance about and connections to MIT resources, and partnering with faculty and departments to ensure a comprehensive advising experience.
Students can now find another key service conveniently located close by: Career Advising and Professional Development has moved into renovated office suites just down the hall, in Building 7.
“It’s just stunning!” marvels Diep Luu, senior associate dean and director of the UAC. “You can’t help but notice the contrast between the historic architecture and the contemporary design. The space is filled with natural light thanks to the floor-to-ceiling windows, and it makes the environment both energizing and comfortable.”
Designed by Merge Architects, the 5,000 square-foot space opens off the Infinite with several informal public spaces for students and community members. These include a series of soaring, vaulted booths with a variety of tables and seating to support multiple kinds of socialization and/or work, a cozy lounge lined with pi wallpaper (carried out to 10,638 digits after 3.14), and the “social stairs” for informal gatherings and workshops. Beyond that, glass doors lead to the UAC office space, which features open workstations, private advising rooms, and conference rooms with Zoom capability.
“We wanted to incorporate as many different kinds of spaces to accommodate as many different kinds of interactions as we could,” explains Kate Trimble, senior associate dean and chief of staff of the Division of Graduate and Undergraduate Education (GUE), who helped guide the renovation project. “After all, the UAC will support all undergraduate students for their entire four-year MIT journey, through a wide variety of experiences, challenges, and celebrations.”
Homing in on the “Boardwalk or Park Place of MIT real estate”
The vision for the new district began to percolate in 2022. At the time, GUE (then known as the Office of the Vice Chancellor, or OVC) was focusing on two separate, key priorities: reconfiguring office space in a post-pandemic, flex-work world; and creating a new undergraduate advising center, in accordance with one of the Task Force 2021 recommendations.
A faculty and staff working group gathered information and ideas from offices and programs that had already implemented “flex-space” strategies, such as Human Resources, IS&T, and the MIT Innovation Headquarters. In thinking about an advising center of the size and scope envisioned, Trimble notes, “we quickly zeroed in on the Building 11 space. It’s such a prominent location. Former Vice Chancellor (and current Vice President for Research) Ian A. Waitz referred to it as the ‘Boardwalk or Park Place of MIT real estate.’ And if you’re thinking about a center that’s going to serve all undergraduates, you really want it to be convenient and centrally located — and boy, that’s a perfect space.”
As plans were made to relocate Student Financial Services to a new home in Building E17, the renovation team engaged undergraduate students and advising staff in the design process through a series of charrette-style workshops and focus groups. Students shared feedback about spaces on campus where they felt most comfortable, as well as those they disliked. From staff, the team learned which design elements would make the space as functional as possible, allowing for the variety of interactions they typically have with students.
The team selected Merge Architects for the project, Trimble says, because “they understood that we were not looking to build something that was an architectural temple, but rather a functional and fun space that meets the needs of our students and staff. They’ve been creative and responsive partners.” She also credits the MIT Campus Construction group and the Office of Campus Planning for their crucial role in the renovation. “I can’t say enough good things about them. They’ve been superb guides through a long and complicated process.”
A more student-centric Infinite Corridor
Construction wrapped up in late summer, and the UAC held an open house for students on Registration Day, Sept. 3. It buzzed with activity as students admired the space, chatted with UAC staff, took photos, and met the office mascot, Winni, a friendly chocolate Labrador retriever.
“Students have been amazed by the transformation,” says Luu. “We wanted a space that encourages community and collaboration, one that feels alive and dynamic, and the early feedback suggests that’s exactly what’s happening,” Luu explains. “It also gives us a chance to better connect students not only with what the UAC offers, but also with support across the Institute.”
“Last year, the UAC offices were behind these two wooden doors in the Infinite Corridor and you had to know that they were there to get to them,” says junior Caleb Mathewos, who has been a UAC orientation leader and captain over the past two years. “The space is very inviting now. I’ve seen people sitting there and working, or just relaxing between classes. I see my friends every now and then, and I’ll stop by and chat with them. Because it’s so much more open, it makes the UAC feel a lot more accessible to students.”
Senior Calvin Macatantan, who’s been involved with the UAC’s First Generation/Low Income Program since his first year and served as an associate advisor and orientation leader, thinks the new space will make it easier for students — especially first years — to find what they need to navigate at MIT. “Before, resources felt scattered across different parts of the Infinite, even though they had similar missions of advising and supporting students. It's nice that there’s a central, welcoming space where those supports connect, and I think that will make a big difference in how students experience MIT.”
The transformation adds significantly to a trend toward creating more student-centric spaces along the Infinite. In the past few years, MIT has added two new study lounges in Building 3, the DEN and the LODGE, and the Department of Materials Science and Engineering built the DMSE Breakerspace in Building 4. This fall, another office suite along the Infinite will be remodeled into a new tutoring hub.
"It’s wonderful to see the UAC space and the whole advising ‘neighborhood,’ if you will, come to fruition,” says Vice Chancellor for Graduate and Undergraduate Education David L. Darmofal. “The need to strengthen undergraduate advising and the opportunity to do so through an Institute advising hub was an outcome of the Task Force 2021 effort, and it’s taken years of thoughtful reflection by many stakeholders to lay the foundation for such a significant sea change in advising. This space is a tangible, visible commitment to putting students first.”
New vaults, situated just off of the Infinite Corridor in Building 11, provide a unique space for students to take a break between classes, chat with friends, or get a little work done. The vaults were fabricated in New Hampshire.
On 21 October, the University opened a new interdisciplinary centre to improve links between science, technology and policymaking. The director of the centre, Tobias Schmidt, talks about the first activities planned for the Einstein School of Public Policy.
The 4MOST (4-metre Multi-Object Spectroscopic Telescope), mounted on the European Southern Observatory’s VISTA telescope in Chile, achieved its ‘first light’ on 18 October 2025: a milestone marking the start of its scientific mission.
Unlike a typical telescope that takes pictures of the sky, 4MOST records spectra – the detailed colours of light from celestial objects – revealing their temperature, motion and chemical makeup. Using 2,436 optical fibres, each thinner than a human hair, the telescope can study thousands of stars and galaxies at once, splitting their light into 18,000 distinct colour components.
“This is an outstanding feat made possible by an amazing development team,” said Dr Roelof de Jong, Principal Investigator of 4MOST at the Leibniz Institute for Astrophysics Potsdam (AIP), which leads the international project. “The first data already look fantastic. To catch light that’s travelled for billions of years in a fibre the size of a hair is mind-boggling.”
When fully operational, 4MOST will scan the entire southern sky every few minutes, building a catalogue of tens of millions of objects. The data it gathers will help answer fundamental questions about how the Milky Way formed, how galaxies grow, and the mysterious forces of dark matter and dark energy shaping the universe.
The telescope’s first images targeted two specific regions: the Sculptor Galaxy, a star-forming spiral 11.5 million light years away, and the globular cluster NGC 288, a dense sphere of 100,000 ancient stars on the Milky Way’s edge. The observations demonstrated 4MOST’s ability to capture a wide range of celestial objects in a single shot.
“With first light, we’re opening a new chapter in sky surveys,” said Professor Matthias Steinmetz, Scientific Director at AIP. “4MOST will help to answer fundamental questions about the formation of the Milky Way, the evolution of galaxies and the forces that shape the Universe.”
The project has been more than a decade in the making, involving 30 universities and research institutes – including the University of Cambridge – across Europe and Australia.
Engineers have equipped the VISTA telescope with a new optical camera nearly a metre wide, giving 4MOST one of the largest fields of view in the world for a telescope of its kind. Every 10 to 20 minutes, its fibres can reposition to observe a new set of targets, with a precision that allows it to switch focus across the sky in under two minutes.
Light captured from each fibre travels to a set of three spectrographs that separate it into red, green and blue components, then into finer detail using detectors with a total of 36 megapixels. Two of these spectrographs analyse the full visible and infrared spectrum, while a third focuses on specific colour bands to reveal the chemical fingerprints of stars.
Behind the telescope is an international team of more than 700 scientists working across 25 major science programmes. Some will focus on rare or exotic celestial objects, while others will build large statistical surveys of stars and galaxies.
Planning of nightly observations will be coordinated from the Max Planck Institute for Extraterrestrial Physics in Germany. The European Southern Observatory will operate the system from its Chilean base.
Data from the telescope will be transferred to the University of Cambridge, where researchers at the Institute of Astronomy lead data management. The Cambridge team will extract physical information from the raw spectra before transferring it to AIP and ESO, which will distribute the processed results for use by the global astronomy community.
“The 4MOST instrument, with its huge number of optical fibres, has meant the development of a highly sophisticated, high throughput, data flow system, running in Cambridge,” said Dr Nicholas Walton, 4MOST Data Management lead. “Our advanced pipeline delivers the highest quality science data, underpinning the amazing discoveries that 4MOST will enable.”
“This is such an exciting time to be an astronomer, as 4MOST and other next-generation telescopes come online,” said Dr Lisa Kelsey from Cambridge’s Institute of Astronomy, a member of the 4MOST team. “It’s taken a long time and a huge team to get here, but we can’t wait to get to work on some exciting new science.”
Kelsey and her Cambridge colleagues are members of one of the first major research projects to use 4MOST: the Time Domain Extragalactic Survey (TiDES). TiDES will focus on extragalactic transients: brief, dramatic events such as supernova explosions, gamma-ray bursts and stars being torn apart by black holes in distant galaxies. By capturing these fleeting flashes of light as they happen, TiDES will help astronomers understand how stars die, how black holes feed, and how the universe evolves on its most violent timescales.
Over its planned 15-year lifetime, 4MOST is expected to revolutionise astrophysical research. By combining an enormous field of view with the ability to study thousands of objects simultaneously, it will deliver one of the most ambitious spectroscopic surveys ever undertaken.
Commercial shipping accounts for 3 percent of all greenhouse gas emissions globally. As the sector sets climate goals and chases a carbon-free future, nuclear power — long used as a source for military vessels — presents an enticing solution. To date, however, there has been no clear, unified public document available to guide design safety for certain components of civilian nuclear ships. A new “Nuclear Ship Safety Handbook” by the MIT Maritime Consortium aims to change that and set the standard for safe maritime nuclear propulsion.
“This handbook is a critical tool in efforts to support the adoption of nuclear in the maritime industry,” explains Themis Sapsis, the William I. Koch Professor of Mechanical Engineering at MIT, director of the MIT Center for Ocean Engineering, and co-director of the MIT Maritime Consortium. “The goal is to provide a strong basis for initial safety on key areas that require nuclear and maritime regulatory research and development in the coming years to prepare for nuclear propulsion in the maritime industry.”
Using research data and standards, combined with operational experiences during civilian maritime nuclear operations, the handbook provides unique insights into potential issues and resolutions in the design efficacy of maritime nuclear operations, a topic of growing importance on the national and international stage.
“Right now, the nuclear-maritime policies that exist are outdated and often tied only to specific technologies, like pressurized water reactors,” says Jose Izurieta, a graduate student in the Department of Mechanical Engineering (MechE) Naval Construction and Engineering (2N) Program, and one of the handbook authors. “With the recent U.K.-U.S. Technology Prosperity Deal now including civil maritime nuclear applications, I hope the handbook can serve as a foundation for creating a clear, modern regulatory framework for nuclear-powered commercial ships.”
The recent memorandum of understanding signed by the U.S. and U.K. calls for the exploration of “novel applications of advanced nuclear energy, including civil maritime applications,” and for the parties to play “a leading role informing the establishment of international standards, potential establishment of a maritime shipping corridor between the Participants’ territories, and strengthening energy resilience for the Participants’ defense facilities.”
“The U.S.-U.K. nuclear shipping corridor offers a great opportunity to collaborate with legislators on establishing the critical framework that will enable the United States to invest in nuclear-powered merchant vessels — an achievement that will reestablish America in the shipbuilding space,” says Fotini Christia, the Ford International Professor of the Social Sciences, director of the Institute for Data, Systems, and Society (IDSS), and co-director of the MIT Maritime Consortium.
“With over 30 nations now building or planning their first reactors, nuclear energy’s global acceptance is unprecedented — and that momentum is key to aligning safety rules across borders for nuclear-powered ships and the respective ports,” says Koroush Shirvan, the Atlantic Richfield Career Development Professor in Energy Studies at MIT and director of the Reactor Technology Course for Utility Executives.
The handbook, which is divided into chapters in areas involving the overlapping nuclear and maritime safety design decisions that will be encountered by engineers, is careful to balance technical and practical guidance with policy considerations.
Commander Christopher MacLean, MIT associate professor of the practice in mechanical engineering, naval construction, and engineering, says the handbook will significantly benefit the entire maritime community, specifically naval architects and marine engineers, by providing standardized guidelines for design and operation specific to nuclear powered commercial vessels.
“This will assist in enhancing safety protocols, improve risk assessments, and ensure consistent compliance with international regulations,” MacLean says. “This will also help foster collaboration amongst engineers and regulators. Overall, this will further strengthen the reliability, sustainability, and public trust in nuclear-powered maritime systems.”
Anthony Valiaveedu, the handbook’s lead author, and co-author Nat Edmonds, are both students in the MIT Master’s Program in Technology and Policy (TPP) within the IDSS. The pair are also co-authors of a paper published in Science Policy Review earlier this year that offered structured advice on the development of nuclear regulatory policies.
“It is important for safety and technology to go hand-in-hand,” Valiaveedu explains. “What we have done is provide a risk-informed process to begin these discussions for engineers and policymakers.”
“Ultimately, I hope this framework can be used to build strong bilateral agreements between nations that will allow nuclear propulsion to thrive,” says fellow co-author Izurieta.
Impact on industry
“Maritime designers needed a source of information to improve their ability to understand and design the reactor primary components, and development of the 'Nuclear Ship Safety Handbook' was a good step to bridge this knowledge gap,” says Christopher J. Wiernicki, American Bureau of Shipping (ABS) chair and CEO. “For this reason, it is an important document for the industry.”
The ABS, which is the American classification society for the maritime industry, develops criteria and provides safety certification for all ocean-going vessels. ABS is among the founding members of the MIT Maritime Consortium. Capital Clean Energy Carriers Corp., HD Korea Shipbuilding and Offshore Engineering, and Delos Navigation Ltd. are also consortium founding members. Innovation members are Foresight-Group, Navios Maritime Partners L.P., Singapore Maritime Institute, and Dorian LPG.
“As we consider a net-zero framework for the shipping industry, nuclear propulsion represents a potential solution. Careful investigation remains the priority, with safety and regulatory standards at the forefront,” says Jerry Kalogiratos, CEO of Capital Clean Energy Carriers Corp. “As first movers, we are exploring all options. This handbook lays the technical foundation for the development of nuclear-powered commercial vessels.”
Sangmin Park, senior vice president at HD Korea Shipbuilding and Offshore Engineering, says, “The 'Nuclear Ship Safety Handbook' marks a groundbreaking milestone that bridges shipbuilding excellence and nuclear safety. It drives global collaboration between industry and academia, and paves the way for the safe advancement of the nuclear maritime era.”
Maritime at MIT
MIT has been a leading center of ship research and design for over a century, with work at the Institute today representing significant advancements in fluid mechanics and hydrodynamics, acoustics, offshore mechanics, marine robotics and sensors, and ocean sensing and forecasting. Maritime Consortium projects, including the handbook, reflect national priorities aimed at revitalizing the U.S. shipbuilding and commercial maritime industries.
The MIT Maritime Consortium, which launched in 2024, brings together MIT and maritime industry leaders to explore data-powered strategies to reduce harmful emissions, optimize vessel operations, and support economic priorities.
“One of our most important efforts is the development of technologies, policies, and regulations to make nuclear propulsion for commercial ships a reality,” says Sapsis. “Over the last year, we have put together an interdisciplinary team with faculty and students from across the Institute. One of the outcomes of this effort is this very detailed document providing detailed guidance on how such effort should be implemented safely.”
Handbook contributors come from multiple disciplines and MIT departments, labs, and research centers, including the Center for Ocean Engineering, IDSS, MechE’s Course 2N Program, the MIT Technology and Policy Program, and the Department of Nuclear Science and Engineering.
MIT faculty members and research advisors on the project include Sapsis; Christia; Shirvan; MacLean; Jacopo Buongiorno, the Battelle Energy Alliance Professor in Nuclear Science and Engineering, director, Center for Advanced Nuclear Energy Systems, and director of science and technology for the Nuclear Reactor Laboratory; and Captain Andrew Gillespy, professor of the practice and director of the Naval Construction and Engineering (2N) Program.
“Proving the viability of nuclear propulsion for civilian ships will entail getting the technologies, the economics and the regulations right,” says Buongiorno. “This handbook is a meaningful initial contribution to the development of a sound regulatory framework.”
“We were lucky to have a team of students and knowledgeable professors from so many fields,” says Edmonds. “Before even beginning the outline of the handbook, we did significant archival and history research to understand the existing regulations and overarching story of nuclear ships. Some of the most relevant documents we found were written before 1975, and many of them were stored in the bellows of the NS Savannah.”
The NS Savannah, which was built in the late 1950s as a demonstration project for the potential peacetime uses of nuclear energy, was the first nuclear-powered merchant ship. The Savannah was first launched on July 21, 1959, two years after the first nuclear-powered civilian vessel, the Soviet ice-breaker Lenin, and was retired in 1971.
Historical context for this project is important, because the reactor technologies envisioned for maritime propulsion today are quite different from the traditional pressurized water reactors used by the U.S. Navy. These new reactors are being developed not just in the maritime context, but also to power ports and data centers on land; they all use low-enriched uranium and are passively cooled. For the maritime industry, Sapsis says, “the technology is there, it’s safe, and it’s ready.”
The inaugural PITCH.nano competition, hosted by MIT.nano’s hard technology accelerator START.nano, provided a platform for early-stage startups to present their innovations to MIT and Boston’s hard-tech startup ecosystem.
The grand prize winner was Active Surfaces, a startup that is generating renewable energy exactly where it is going to be used through lightweight, flexible solar cells. Active Surfaces says its ultralight, peel-and-stick panels will reimagine how we deploy photovoltaics in the built environment.
Shiv Bhakta MBA ’24, SM ’24, CEO and co-founder, delivered the winning presentation to an audience of entrepreneurs, investors, startup incubators, and industry partners at PITCH.nano on Sept. 30. Active Surfaces received the grand prize of 25,000 nanoBucks — equivalent to $25,000 that can be spent at MIT.nano facilities.
“Why has MIT.nano chosen to embrace startup activity as much as we do?” asked Vladimir Bulović, MIT.nano faculty director, at the start of PITCH.nano. “We need to make sure that entrepreneurs can be born out of MIT and can take the next technical ideas developed in the lab out into the market, so they can make the next millions of jobs that the world needs.”
The journey of a hard-tech entrepreneur takes at least 10 years and $100 million, explained Bulović. By linking open tool facilities to startup needs, MIT.nano can make those first few years a little bit easier, bringing more startups to the scale-up stage.
“Getting VCs [venture capitalists] to invest in hard tech is challenging,” explained Joyce Wu SM ’00, PhD ’07, START.nano program manager. “Through START.nano, we provide discounted access to MIT.nano’s cleanrooms, characterization tools, and laboratories for startups to build their prototypes and attract investment earlier and with reduced spend. Our goal is to support the translation of fundamental research to real-world solutions in hard tech.”
In addition to discounted access to tools, START.nano helps early-stage companies become part of the MIT and Cambridge innovation network. PITCH.nano, inspired by the MIT 100K Competition, was launched as a new opportunity this year to introduce these hard-tech ventures to the investor and industry community. Twelve startups delivered presentations that were evaluated by a panel of four judges who are, themselves, venture capitalists and startup founders.
“It is amazing to see the quality, diversity, and ingenuity of this inspiring group of startups,” said judge Brendan Smith PhD ’18, CEO of SiTration, a company that was part of the inaugural START.nano cohort. “Together, these founders are demonstrating the power of fundamental hard-tech innovation to solve the world’s greatest challenges, in a way that is both scalable and profitable.”
“MIT.nano has been instrumental in compressing our time to market, especially as a company building a novel, physical product,” said Bhakta. “Access to world-class characterization tools — normally out of reach for startups — lets us validate scale-up much faster. The START.nano community accelerates problem-solving, and the nanoBucks award is directly supporting the development of our next prototypes headed to pilot.”
In addition to the grand prize, a 5,000 nanoBucks audience choice award went to Advanced Silicon Group, a startup that is developing a next-generation biosensor to improve testing in pharma and health tech.
Now in its fifth year, START.nano has supported 40 companies spanning a diverse set of market areas — life sciences, clean tech, semiconductors, photonics, quantum, materials, and software. Fourteen START.nano companies have graduated from the program, proving that START.nano is indeed succeeding in its mission to help early-stage ventures advance from prototype to manufacturing. “I believe MIT.nano has a fantastic opportunity here,” said judge Davide Marini, PhD ’03, co-founder and CEO of Inkbit, “to create the leading incubator for hard tech entrepreneurs worldwide.”
START.nano accepts applications on a monthly basis. The program is made possible through the generous support of FEMSA.
Twelve startups from MIT.nano’s hard technology accelerator, START.nano, presented at the first annual PITCH.nano competition. Back row (l-r): John Ho, PhD ’09; Davide Marini, PhD ’03; Shiv Bhakta MBA ’24, SM ’24; Joshua Yang; Uroš Kuzmanović; Harish Banda; Joyce Wu SM ’00, PhD ’07; and Vladimir Bulović, MIT.nano faculty director. Middle row (l-r): Brendan Smith PhD ’18; Jacob Grose; Chis Taylor; and Ana Cornell. Front row (l-r): Laura Lande-Diner; Laura Andre; Marcie Black ’94, MEng ’95, PhD ’03; Mani Morampudi; and Bill Jacobson.
Since launching in 2008, the MIT Global Seed Funds (GSF) program has awarded roughly $30 million to more than 1,300 high-impact faculty research projects across the world, spurring consequential collaborations on topics that include swine-fever vaccines, deforestation of the Amazon, the impact of “coral mucus” on the Japanese island of Okinawa, and the creation of an AI-driven STEM-education lab within Nigeria’s oldest university.
Administered by the MIT Center for International Studies (CIS) and open to MIT faculty and principal investigators, GSF boasts a unique funding structure consisting of both a general fund for unrestricted geographical use and more than 20 different specific funds for individual universities, regions, and countries.
GSF projects often tackle critical challenges that require international solutions, culminating in patents, policy changes, and published papers in journals such as Nature and Science. Some faculty-led projects from this year include Professor Hugh Herr’s modular crutches for people with disabilities in Sierra Leone, Research Scientist Paolo Santi’s large-language models to predict energy consumption in grocery stores, and Professor Ernest Fraenkel’s development of mRNA therapies for the neurodegenerative disease amyotrophic lateral sclerosis (ALS).
GSF Assistant Director Justin Leahey, who is managing director of the MIT-Germany and MIT-Switzerland programs, says that GSF has expanded exponentially over the years, including most recently into the Czech Republic, Norway, Slovakia, and — starting in fall 2025 — Hungary. This year there were a grand total of roughly 300 research proposals submitted for consideration, with many of the accepted proposals including the active participation of students at both the graduate and undergraduate level.
Central to GSF’s work is “reciprocal exchange” — the concept of collaborators in and out of MIT sharing their work and exchanging ideas in an egalitarian way, rather than bringing a one-sided approach to different research challenges.
Frequent collaborator Raffaella Gozzelino, a neurology researcher and principal investigator at NOVA Medical School in Portugal who works closely with Jacquin Niles, an MIT professor of biological engineering, says that research is more impactful “when specialized knowledge integrates local realities and reveals potential solutions to national challenges,” and views the spirit of reciprocal exchange as something that revolves around “sharing knowledge and co-creating solutions that empower one another and build bridges across borders.”
For Cindy Xie ’24, MCP ’25, her master’s thesis emerged from the first-ever GSF-supported research internship in Cape Verde, where she worked with Niles and Gozzelino to explore the impact of climate change on anemia in the country of 500,000 people, focusing specifically on its largest island of Santiago. Xie says that she was struck by the intertwined intersectional nature of the issues of nutrition, climate, and infection in Santiago, home to the nation’s capital city of Praia. For example, Xie and Gozzelino’s team found that respondents perceived a rise in costs of fresh produce over time, exacerbated by drought and unpredictable agricultural conditions, which in turn impacted existing nutritional deficiencies and increased residents’ susceptibility to mosquito-borne diseases.
“Though this multidisciplinary research lens is challenging in terms of actual project implementation, it was meaningful in that it generated insights and connections across fields that allow our research to be better contextualized within the experiences of the communities that it impacts,” Xie says.
Gozzelino says that it has been meaningful to witness how scientific research can transcend academic boundaries and generate real impact. She says that, by examining the effects of climate change on infectious diseases and nutrition in Cape Verde, the team will be able to build a framework that can directly inform public policy.
“Contributing to a project that underscores the importance of integrating scientific knowledge into decision-making will safeguard vulnerable populations and make them feel included in the society they belong,” Gozzelino says. “This collaboration has revealed the enormous potential of international partnerships to strengthen local research capacity and address global challenges.”
During her time in Cape Verde working with Xie and Gozzelino, Amulya Aluru ’23, MEng ’24 got to meet with 20 local officials and connect with new people in a wide range of roles across the country, helping her “recognize the power of interpersonal relationships and collaboration” in public health research. She says that the structure of the GSF grant gave her the unique experience of having mentors and coworkers in three different countries, spanning Cape Verde, the United States, and Portugal.
Aluru says that this kind of cross-pollination “enabled me to strengthen my research with different perspectives and challenged me to approach my work in a way that I’d never done before, with a more global mindset.”
Xie similarly expresses her deep appreciation for the long-term relationships she has built through the project and the linkages between Santiago and Boston, which itself is home to one of the world’s largest Cape Verdean diasporas. “As a student, this was a valuable experience to inform the approaches to collaboration that I would like to implement in my own future work,” Xie says.
More broadly, Gozzelino sees GSF grants like the Cape Verde one as being not simply a vehicle for financial support, but “a catalyst for turning partnerships into long-term impactful collaborations, demonstrating how global networks can aid the development of human capital.”
GSF’s long history of reaching across departments and borders has led to multiple meaningful academic collaborations that have since come to span continents — and decades. In 2015, Professor Jörn Dunkel — an applied mathematician at MIT — kicked off work on a data-sharing repository for bacterial biofilms with the interdisciplinary German microbiologist Knut Drescher, then a professor of biophysics at Philipps-Universität Marburg in Germany. Dunkel and Drescher have since co-authored more than 15 papers together in publications like Nature Physics and Science Advances alongside their teams of graduate students and postdocs, even after Drescher moved to Switzerland to join the faculty of the University of Basel’s Biozentrum Center for Molecular Life Sciences.
“Our collaboration often creates great synergy by combining my team’s experiments with the theory from Jörn’s team,” says Drescher. “It is a great joy to see his perspective on the experimental systems we are working on. He is able to really understand and engage with experimental biological data, identifying patterns in seemingly distant biological systems.”
In explaining the CIS initiative’s success, Leahey points to the synergistic, academically eclectic, cross-disciplinary nature of the program. “[GSF] is a research fund that doesn’t ‘fund research’ in the conventional sense,” he says. “It seeds early-stage collaboration and lets people explore.”
GSF’s long history of reaching across departments and borders has led to multiple meaningful academic collaborations that have since come to span continents — and decades.
Nation & World
AI presents challenges to journalism — but also opportunities
Sotiris Sideris. Photos by Niles Singer/Harvard Staff Photographer
Christina Pazzanese
Harvard Staff Writer
October 20, 2025
4 min read
Data editor explains how digital tools sift through mountains of government, business data to find ways to make things better or unearth crimes
The surge of AI-produced articles has ignited concerns about the accuracy of news amid the dwindling number of working journalists who serve as a counterforce against the dissemination of inaccurate or false information.
Certainly AI poses ethical and other challenges, but it also offers reporters greater opportunities to do high-impact, consequential stories, said Sotiris Sideris, data editor at the Center for Collaborative Investigative Journalism and at Reporters United in Greece, during a talk at the Center for European Studies on Oct. 14.
Data- and generative AI-driven tools allow reporters to analyze vast troves of government and business data far more quickly and to identify important patterns that point the way to improvements or uncover questionable, or even illegal, activities, he said.
“The question today isn’t whether we are using AI in journalism, because we do it already,” but whether “we can do journalism without outsourcing our skepticism, our ethics, and our sense of accountability, both as journalists ourselves and the accountability we are asking people and organizations that hold power to provide,” said Sideris, who is studying how generative AI can better assist investigative reporting as a 2026 Nieman Fellow.
Sideris shared how AI tools helped uncover a fleet of Greek-owned ships that was stealthily transporting Russian oil to Europe in violation of sanctions.
Human reporting, writing, and editing are still essential to getting stories, but data and generative AI can play an important dual role as a “microscope” that helps reporters quickly cut through the information “noise” hidden within disparate documents and reports, he said.
At the same time, these tools can serve as a “mirror that reflects our own biases and our own stereotypes” that can mislead journalists into drawing the wrong conclusions, he said.
Sideris shared with the students and journalists present how AI tools helped him and his colleagues uncover a fleet of Greek-owned ships that was stealthily transporting Russian oil to Europe in violation of sanctions.
They were also able to show how the popularity of Airbnb since its introduction in Greece less than a decade ago drove up rents and home sale prices, which led to widespread displacement of Athens residents as areas became unaffordable and foreclosures spiked.
Reporters were able to pull and scrutinize data from records and filings publicly available on the internet and piece together the complicated global network of property ownership for a follow-on piece. That report showed how foreclosed properties were auctioned off on platforms not accessible to the public and bought at steep discounts by the very banks that had foreclosed on the homes.
Journalists need to learn how to use data and generative AI, understand their power and their limitations, and still do the old-fashioned hard work of vetting and documenting what they know and how they know it, he said.
In this new era of journalism, “total transparency” is more imperative than ever.
“When we ask for somebody to be transparent, we cannot ask them without us being transparent in the very, very beginning of the story about how we are using the tools, about who is funding our work, about any editorial decision that we make along the way,” said Sideris.
Whether in long investigative pieces or daily newsgathering, journalists need to be up front about how they’re working with AI, he said.
“There is no reason to conceal it. We know that you are using it; we know that everybody is using it. It’s not a secret anymore. It’s not something to feel bad about.”
Alan Robert Whitney ’66, SM ’67, PhD ’74, a longtime research scientist at the MIT Haystack Observatory who also served as its associate director and interim director, died on Sept. 28 at age 81.
Whitney was a key contributor to the accomplishments and reputation of Haystack Observatory, having led the development of innovative technologies to advance the powerful radio science technique of very long baseline interferometry (VLBI). He ascended to the rank of MIT principal research scientist, served for many years as associate director of the observatory, and in 2007–08 took the reins as interim director. In 2011, he was awarded an MIT Excellence award.
From an early age, Whitney displayed extraordinary talent. Raised in Wyoming, as a high schooler he won the state science fair in 1962 by building a satellite telemetry receiver, which he designed and built from transistors and other discrete components in a barn on his family’s dairy farm. He enrolled at MIT and completed a five-year master’s degree via a cooperative internship program with Bell Laboratories, subsequently earning his PhD in electrical engineering.
Haystack Director Phil Erickson says, “Alan’s personality and enthusiasm were infectious, and his work represented the best ideals of the Haystack and MIT research enterprise — innovative, curious, and exploring the frontiers of basic and applied science and technology.”
In the late 1960s, as part of his PhD work, he was heavily involved in the pioneering development of VLBI, an extraordinary technique that yielded direct measurements of continental drift and information on distant radio sources at unprecedented angular resolution. A landmark paper led by Whitney demonstrated the presence of apparent superluminal (faster than light) motion of radio sources, which was explained as highly relativistic motion aligned toward the Earth. He spent the rest of his long and productive career at Haystack, pushing forward VLBI technology to ever-greater heights and ever-more impactful scientific capabilities.
“Alan was a technology pillar, a stalwart builder and worldwide ambassador of Haystack, and a leading figure of the VLBI geodetic community who inspired generations of scientists and engineers,” says Pedro Elosegui, leader of the Haystack geodesy group. “He contributed fundamentally to the vision and design of the VLBI Geodetic Observing System, outlining a path to a next-generation VLBI system with unprecedented new capabilities to address emerging space geodesy science needs such as global sea-level rise.”
The early days of VLBI demanded heroic and grueling efforts, traveling the world with exotic devices in hand-carried luggage, mounting and dismounting thousands of magnetic tapes every couple of minutes for hours on end, troubleshooting complex and sensitive instrumentation, and writing highly specialized software for the mainframe computers of the day. Whitney was fully engaged on all these fronts. By the early 1980s, the Mark III recording and correlation systems, whose development was led by Whitney, were established as the state of the art in VLBI technology, and a standard around which the global VLBI community coalesced.
Whitney later led the transition to VLBI disk-based recording. Specialized and robust Mark V systems optimized for shipping logistics and handling were transferred to industry for commercialization, leading once again to widespread global adoption of Haystack-developed VLBI technology. Consistently across all these developments, Whitney identified and exploited the most relevant and practical emerging technologies for the Haystack VLBI mission in hardware, software, and computing infrastructure.
In the latter part of his career, Whitney continued to innovate, pushing the technical boundaries of VLBI. A key advance was the Mark 6 (Mk6) recording system, capable of yet faster recording, higher sensitivity, and more robustness. The Mk6 recorders’ essential capability allowed the creation of the Event Horizon Telescope, which famously yielded the first image of the shadow of a black hole. Mk6 recorders are now used to routinely record data roughly 100,000 times faster than the computer tapes used at the start of his career.
As a senior technical and scientific leader, Whitney provided broad leadership and consultation to Haystack, and worked on a number of projects outside of the VLBI world. He served as interim Haystack director from January 2007 until a permanent director was appointed in September 2008. He also engaged with the development project for the international Murchison Widefield Array (MWA) in Australia, focused on frontier research into the early universe. Whitney served as MWA project director from 2008 until groups in Australia took over the construction phase of the project a few years later. Until his full retirement in 2012, Whitney continued to provide invaluable technical insights and support at Haystack, and was a trusted and wise counsel to the Haystack Director’s Office. He was also a co-recipient of the 2020 Breakthrough Prize in Fundamental Physics, awarded to the Event Horizon Telescope Collaboration.
Alan Whitney was a top-notch technologist with a broad perspective that allowed him to guide Haystack to decades of influential leadership in the development and refinement of the VLBI technique. His dedication at MIT to the observatory, its people, and its mission was a source of inspiration to many at Haystack and well beyond. He was widely admired for the clarity of his thought, the sharpness of his intellect, and his genial and friendly nature. His numerous local, national, and global colleagues will feel his absence.
Alan Whitney was “a technology pillar, a stalwart builder and worldwide ambassador of Haystack, and a leading figure of the VLBI geodetic community who inspired generations of scientists and engineers,” says Pedro Elosegui, leader of the Haystack Observatory geodesy group.
MJ Grein. Veasey Conway/Harvard Staff Photographer
Health
When communication could mean life or death
Workshops on delivering better medical care to deaf patients stress importance of sign language, body language
Sy Boles
Harvard Staff Writer
October 20, 2025
3 min read
As a child, MJ Grein would interpret for her deaf mother during medical appointments. At a recent workshop designed to help healthcare professionals communicate more effectively with deaf and hard-of-hearing patients, she shared personal stories with the trademark humor of someone who has made the best of a difficult experience.
She encouraged attendees to learn more sign language, ideally from deaf instructors.
“Everyone can learn sign language. Deaf people can’t learn to hear, and that’s the difference.”
MJ Grein
“Everyone can learn sign language,” said the former sign language interpreter and executive assistant at Harvard Medical School’s Countway Library. “Deaf people can’t learn to hear, and that’s the difference.”
For the deaf and hard-of-hearing, interacting with the healthcare system can be a major source of stress, Grein said. A doctor who knows some basic signs, or even how to work with an interpreter, can substantially improve patients’ experiences.
Grein offered practical strategies. She emphasized the importance of engaging directly with the patient and being attentive to body language and facial expressions.
“You can tell a lot by a person’s facial expressions, even when you don’t know sign language,” she said.
Attendees learned basic vocabulary that might be helpful in medical encounters — signs for words like pain, help, emergency, nurse, doctor, and breathing. “We all know how essential it is to calm someone down, to take deep breaths, in order to help them,” Grein said. Even a few signs, she added, can help make a patient feel more secure.
“I want to demystify it, if you will,” said Grein. “How do you work with an interpreter, if you’ve never worked with one? What are the expectations?”
Grein’s series — launched in September in recognition of Deaf Awareness Month — is one of many programs offered at the medical library, which has become a hub of community engagement and professional development in the Longwood Medical Area, according to Countway’s interim director, Len Levin.
“The library is no longer just a place where we have books and online resources,” Levin said. “It’s a place where we identify all sorts of possible needs for our medical community and try to address them.”
The Countway, which celebrated its 60th anniversary this year, is one of the largest repositories of medical information in the world, Levin said, adding that a 2019-2023 renovation has seen foot traffic more than double.
“Everyone really sees the Countway as sort of like a campus center, which it never was before,” Levin explained, adding that one key reason is enhanced programming, including series like Grein’s.
During Grein’s recent session she explained why she’s become a strong advocate against turning to the children of deaf adults to translate for their parents, especially in sensitive situations.
“Kids are kids,” Grein said. “It’s not their job.”
Science & Tech
You see Saturn’s rings. She sees hidden number theory.
Laura DeMarco. Stephanie Mitchell/Harvard Staff Photographer
Sy Boles
Harvard Staff Writer
October 20, 2025
4 min read
Math professor finds psychedelic beauty in complex sequences
Why are there gaps in the asteroid belt?
The answer can be found in dynamical systems, the branch of mathematics that deals with the evolution of chaotic systems over time. The field was the subject of a recent Harvard Radcliffe Institute talk by Laura DeMarco.
“It turns out there’s a lot of hidden number theory in the planets’ very motion, for example, in the structure of the rings around Saturn,” explained DeMarco, Radcliffe Alumnae Professor at Harvard Radcliffe Institute and a professor of mathematics in the Harvard Faculty of Arts and Sciences. There are places asteroids simply cannot be, she went on, because of the underlying number theory that “prohibits certain behavior.”
To find equations that predict real-world systems such as planetary motion, DeMarco researches sequences of numbers that are generated recursively. Perhaps the best-known example is the Fibonacci sequence, a linear recursion in which each number is the sum of the two numbers that came before it, starting with 0 and 1. Visual representations of the Fibonacci sequence appear in the natural world — for example, in the scales of pine cones, in pineapple fruitlets, and in the structure of nautilus shells.
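That defining rule translates directly into a few lines of code. The sketch below is a generic Python illustration of the recursion, not code drawn from DeMarco’s own work:

```python
def fibonacci(n):
    """Return the first n Fibonacci numbers, starting with 0 and 1."""
    seq = [0, 1]
    while len(seq) < n:
        # Each new term is the sum of the two terms before it.
        seq.append(seq[-1] + seq[-2])
    return seq[:n]

print(fibonacci(10))  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```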
“This suggests that somehow the Fibonacci numbers can be used to encode some aspects of the natural world,” DeMarco said, just like more complex nonlinear sequences show up in the chaotic and seemingly unpredictable structures of planetary motion, weather patterns, and changes in animal populations.
“A lot of what I think about may or may not be useful for real life, everyday life,” she said. “But it’s trying to understand how the complexity of numbers, and the interactions of numbers, force geometry, force certain geometric structures.”
A visualization of the Mandelbrot Set.
Photo courtesy of Laura DeMarco
DeMarco is fascinated by the complexity of seemingly simple sequences. For example, every number in the Fibonacci sequence, except 8 and 144, has a prime factor that has not appeared earlier in the sequence.
“At every step, you’re going to see at least one new [prime] number, suggesting that there’s something rich and complex about this sequence,” DeMarco said. “Any old sequence is not going to have this feature.”
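That claim can be checked numerically for the first few dozen terms. The short Python sketch below is a generic illustration, not DeMarco’s research code: it factors each Fibonacci number by trial division and flags any term, such as 8 and 144, that contributes no new prime factor.

```python
def prime_factors(n):
    """Return the set of prime factors of n, found by trial division."""
    factors, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            factors.add(d)
            n //= d
        d += 1
    if n > 1:
        factors.add(n)
    return factors

def check_primitive_primes(count):
    """Flag Fibonacci numbers with no prime factor unseen in earlier terms."""
    a, b = 1, 1          # F(1) and F(2)
    seen = set()
    for i in range(3, count + 1):
        a, b = b, a + b  # b is now F(i)
        factors = prime_factors(b)
        if not factors - seen:
            print(f"F({i}) = {b}: no new prime factor (a known exception)")
        seen |= factors

check_primitive_primes(30)  # flags only F(6) = 8 and F(12) = 144
```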
The theorem that almost every number in certain recursive sequences like the Fibonacci has at least one prime divisor that does not appear earlier in the sequence was first proved in 1913 by the mathematician R.D. Carmichael. What’s not known is whether a similar theorem holds true for nonlinear recursive sequences. Searching for the answer to that question has led DeMarco from the numeric to the psychedelic — or at least, the visually psychedelic.
One of DeMarco’s favorite visualizations looks remarkably like an eye.
Photo courtesy of Laura DeMarco
DeMarco graphs nonlinear sequences into complex, recursive visualizations, looking for patterns of symmetry and asymmetry. The asymmetry in her visualizations provides evidence that multiple recursions, when run simultaneously and following very simple sets of rules, can generate extraordinary complexity.
“I don’t intend to create art, per se, but somehow and sometimes the pictures are surprisingly beautiful,” she said.
One of her favorite visualizations looks remarkably like an eye. It seems perfectly symmetrical, but it contains slight variations, and, when you zoom in, a tiny copy of a famous fractal shape, the Mandelbrot Set.
Like the Fibonacci sequence, the Mandelbrot Set is defined by a simple rule: it contains all the complex numbers for which a particular nonlinear recursion, applied over and over, never produces ever-larger values — that is, it does not diverge to infinity. If the values ever grow beyond a magnitude of 2, the sequence is guaranteed to diverge, and the complex number used as a constant is not part of the Mandelbrot Set.
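That membership rule amounts to a short escape-time test. The sketch below uses the standard quadratic map z → z² + c that defines the set; it is a generic, textbook-style check in Python, not DeMarco’s own software:

```python
def in_mandelbrot(c, max_iter=200):
    """Treat c as inside the set if iterating z -> z**2 + c from z = 0
    never exceeds magnitude 2 within max_iter steps."""
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False  # the values have escaped, so c is outside the set
    return True  # still bounded, so c is (approximately) inside

print(in_mandelbrot(complex(-1, 0)))  # True: -1 stays bounded
print(in_mandelbrot(complex(1, 0)))   # False: 1 escapes within a few steps
```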
In DeMarco’s eye visualization, the Mandelbrot set shows up on one side but not the other, acting, in her words, as a “certificate of independence” between the two sides. Those tiny distinctions, invisible to the naked eye, allow her to draw conclusions about the underlying math.
“I don’t intend to create art, per se, but somehow and sometimes the pictures are surprisingly beautiful,” said DeMarco.
Photo courtesy of Laura DeMarco
Of course, a good visualization is just the first step. “I need some logical argument to build on what’s known, that’s not just true for what you see in the picture, but that is always true.”
DeMarco shared more about her work in a 2024 conversation on the Harvard Radcliffe Institute’s podcast, BornCurious.
Researchers at Cornell Tech and Cornell Bowers engaged directly with 15 content moderators on Reddit to see exactly how they try to preserve the news sharing site's humanity in an increasingly AI-infused world.
Cambridge University Boat Club (CUBC) made its most significant showing in decades at the regatta, held from 17-19 October in Boston, Massachusetts, USA. With six crews, including four openweight eights and two lightweight fours, CUBC fielded a line-up full of new talent as well as key returners.
It was a super Sunday for the Cambridge Men, who surged to victory in the Men’s Championship Eights with a blistering pace, securing back-to-back titles and sending a clear statement of intent about the programme’s ambitions.
Cox Sammy Houdaigui (Fitzwilliam) said: “We came to this race because CUBC’s goal, in addition to beating Oxford, is to be the best boat club in the world. And I don’t know if we’re the best boat club in the world, but I’m very proud of our actions today.”
The Cambridge Men’s ‘B’ crew also delivered an impressive performance, beating ‘A’ crews from Yale, Syracuse, Boston University, and Leander.
In the Women’s Championship Eights, the Cambridge Women delivered a strong performance to finish ninth, ahead of top-tier ‘A’ crews from Dartmouth, Boston University, Clemson, Princeton, and Duke. Cambridge’s ‘B’ crew also put in a solid performance, crossing the line in eighteenth place and outperforming ‘A’ crews from Fordham and Leander.
For the first time in many years, Cambridge also sent two lightweight fours, highlighting the Club’s commitment to developing competitive lightweight crews ready to take on top American programmes. The Men’s Lightweight Four delivered a strong performance, finishing fifth, just 21 seconds behind course record breakers, Harvard. Meanwhile, the Women’s Lightweight Four, featuring a full set of fresh faces, took seventh place, ahead of the ‘A’ boats from Wisconsin and Radcliffe.
Once a sperm has broken through to an egg cell in order to fertilise it, the two cells need to hold together tightly. This occurs via a type of protein binding that is among the strongest in biology – and it is also unique.
Campus & Community
Looks like a book. Reads, to some, like a threat.
Houghton exhibit explores forbidden history
Anna Lamb
Harvard Staff Writer
October 17, 2025
4 min read
This work of the Renaissance humanist Erasmus has had passages expurgated in heavy black ink. Over time, the corrosive ink has begun to eat holes in the paper.
Photos by Stephanie Mitchell/Harvard Staff Photographer
Books about sex, science, and politics were among the works selected for “Banned in Boston (and Beyond),” a Houghton Library pop-up exhibition that coincided with the American Library Association’s Banned Books Week.
“I think you’ll find very few librarians for whom the freedom to read and the freedom of access to information isn’t a very important topic, and that’s a reason I really wanted to do something about this subject,” said John Overholt, who organized the exhibition. “Because it means a lot to me.”
Overholt, who is Houghton’s curator of early books and manuscripts, embraced the chance to explore the University’s extensive collection of previously banned books.
“I learned so much about the collections in the process of digging through HOLLIS and seeing what things I could find,” he said.
Curator John Overholt speaks with colleague Karintha Lowe.
Sex and substances were well represented in the exhibition, which included a copy of D.H. Lawrence’s 1928 novel “Lady Chatterley’s Lover.” Also among the titles were William Powell’s “The Anarchist Cookbook” (1971), a counterculture text filled with recipes for weapons and drugs, and Madonna’s 1992 coffee table book “Sex,” which hardly met an obscenity watchdog it did not provoke.
Madonna had nothing on Copernicus, of course. A copy of the Renaissance astronomer’s “De revolutionibus orbium coelestium” (1543) reminded viewers of an era when work questioning the Earth’s position at the center of the universe was banned by the Catholic Church.
The idea to put on a banned books exhibition came partly from Hannah Marcus, a professor of the history of science at Harvard, who teaches a course on Galileo — a scientist persecuted for his heliocentric ideas.
“There are different subjects that are particularly in the censorial gaze at different times,” Marcus said. “We’re seeing that in our present as well, right? Heliocentrism not a problem. ‘Lady Chatterley’s Lover’ less of a problem. And instead … it’s the fixations of a particular period.”
Some of the exhibited books showed clear signs of disapproval, including expurgation. Others were unmarked, but had been kept hidden or under lock and key.
British novelist Ernest Raymond was among those recruited to attest to the literary merit of “Lady Chatterley’s Lover” in the 1960 obscenity trial against Penguin Books.
Walt Whitman’s “Leaves of Grass” was banned not necessarily for being explicit, but rather for the ideas it contained.
A rare early copy of “Howl” by Allen Ginsberg clandestinely printed on a mimeograph machine at San Francisco State University. The version published later that year by City Lights Press was seized for obscenity by U.S. Customs.
Giovanni Boccaccio’s “Stories of Boccaccio,” for example, was kept for decades in the “Inferno” section of Harvard’s Widener Library. This restricted section was a set of stacks behind metal gates that housed books containing erotic material, as well as some valuable early editions.
“I wanted to have as wide a span as possible,” Overholt said. “To show the ongoing history there is [of banning books].”
Also included in the exhibition were texts about same-sex relationships, such as a collection of Walt Whitman poems and Radclyffe Hall’s 1928 novel “The Well of Loneliness.” These books, Overholt said, were banned not necessarily for being explicit, but rather for the ideas contained within.
“One thing I wanted to highlight is how innocuous some of this does seem in retrospect,” he said. “I don’t think reading ‘The Well of Loneliness’ made anyone a lesbian, and I don’t think preventing anyone from reading ‘The Well of Loneliness’ prevented anyone from being a lesbian.”
Ultimately, he added, that’s why censorship is often doomed to fail.
“Because the books have a lot of power, but they don’t necessarily have that power.”
The MIT School of Engineering welcomes new faculty members across six of its academic units. This new cohort of faculty members, who have recently started their roles at MIT, conduct research across a diverse range of disciplines.
“We are thrilled to welcome these accomplished scholars to the School of Engineering,” says Maria C. Yang, interim dean of engineering and William E. Leonhard (1940) Professor in the Department of Mechanical Engineering. “Each brings unique expertise across a wide range of fields and is advancing knowledge with real-world impact. They all share a deep commitment to research excellence and a passion for teaching and mentorship.”
Faculty with appointments in the Department of Electrical Engineering and Computer Science (EECS) and the Institute for Data, Systems, and Society (IDSS) report into both the School of Engineering and the MIT Stephen A. Schwarzman College of Computing.
The new engineering faculty include:
Masha Folk joined the Department of Aeronautics and Astronautics as an assistant professor in July 2024 and is currently the Charles Stark Draper Career Development Professor. Her research focuses on sustainable aerospace technology driven by a deep desire to accelerate carbon-neutral aviation. She previously worked as an aerodynamics specialist for Rolls-Royce. Folk received her BS in aerospace engineering from Ohio State University, her MS in aerospace engineering from Purdue University, and her PhD in energy, fluids, and turbomachinery from the University of Cambridge.
Sophia Henneberg joined the Department of Nuclear Science and Engineering (NSE) as an assistant professor in September. Her research focuses on developing, utilizing, and extending optimization tools to identify new stellarator designs, a promising path toward fusion energy. Previously, she was the principal investigator of EUROfusion’s Stellarator Optimization Theory, Simulation, Validation, and Verification group. Henneberg received a BS in physics at the Goethe-Universität, an MA in physics at the University of Wisconsin at Madison, and a PhD in physics at the University of York.
Omar Khattab joined the Department of Electrical Engineering and Computer Science as an assistant professor in July. He is also affiliated with the Computer Science and Artificial Intelligence Laboratory (CSAIL). His research develops new algorithms and abstractions for declarative AI programming and for composing retrieval and reasoning. Khattab previously worked as a research scientist at Databricks. He received a BS in computer science from Carnegie Mellon University and a PhD in computer science from Stanford University.
Tania Lopez-Silva joined the Department of Materials Science and Engineering as an assistant professor in July. Her research focuses on supramolecular hydrogels — soft materials made from self-assembling molecules, primarily peptides. Previously, she served as a postdoc at the National Cancer Institute. Lopez-Silva earned her BS in chemistry from Tecnológico de Monterrey and her MA and PhD in chemistry from Rice University.
Ethan Peterson ’13 joined the Department of Nuclear Science and Engineering as an assistant professor in July 2024. His research focuses on improving radiation transport and transmutation methods for the design of fusion technologies, as well as whole-facility modeling for fusion power plants. Previously, he worked as a research scientist at MIT’s Plasma Science and Fusion Center. Peterson received his BS in nuclear engineering and physics from MIT and his PhD in plasma physics from the University of Wisconsin at Madison.
Dean Price joined the Department of Nuclear Science and Engineering as the Atlantic Richfield Career Development Professor in Energy Studies and an assistant professor in September. His work focuses on the simulation and control of advanced reactors, with expertise in uncertainty quantification, scientific machine learning, and artificial intelligence for nuclear applications. Previously, he was the Russell L. Heath Distinguished Postdoctoral Fellow at Idaho National Laboratory. He earned his BS in nuclear engineering from the University of Illinois and his PhD in nuclear engineering from the University of Michigan.
Daniel Varon joined the Department of Aeronautics and Astronautics as the Boeing Assistant Professor, holding an MIT Schwarzman College of Computing shared position with IDSS, in July. Varon’s research focuses on using satellite observations of atmospheric composition to better understand human impacts on the environment and identify opportunities to reduce them. Previously, he held a visiting postdoctoral fellowship at the Princeton School of Public and International Affairs. Varon earned a BS in physics and a BA in English literature from McGill University, and an MS in applied mathematics and PhD in atmospheric chemistry from Harvard University.
Raphael Zufferey joined the Department of Mechanical Engineering as an assistant professor in January. He studies bioinspired methods and unconventional designs to solve seamless aerial and aquatic locomotion for applications in ocean sciences. Zufferey previously worked as a Marie Curie postdoc at the École Polytechnique Fédérale de Lausanne (EPFL). He received his BA in micro-engineering and MS in robotics from EPFL and a PhD in robotics and aeronautics from Imperial College London.
The School of Engineering is also welcoming a number of faculty in the Department of EECS and the IDSS who hold shared positions with the MIT Schwarzman College of Computing and other departments. These include: Bailey Flanigan, Brian Hedden, Yunha Hwang, Benjamin Lindquist, Paris Smaragdis, Pu “Paul” Liang, Mariana Popescu, and Daniel Varon. For more information about these faculty members, read the Schwarzman College of Computing’s recent article.
Top row, left to right: Masha Folk, Sophia Henneberg, Omar Khattab, and Tania Lopez-Silva. Bottom row, left to right: Ethan Peterson, Daniel Varon, Dean Price, and Raphael Zufferey.
The MIT Schwarzman College of Computing welcomes 11 new faculty members in core computing and shared positions to the MIT community. They bring varied backgrounds and expertise spanning sustainable design, satellite remote sensing, decision theory, and the development of new algorithms for declarative artificial intelligence programming, among others.
“I warmly welcome this talented group of new faculty members. Their work lies at the forefront of computing and its broader impact in the world,” says Dan Huttenlocher, dean of the MIT Schwarzman College of Computing and the Henry Ellis Warren Professor of Electrical Engineering and Computer Science.
College faculty include those with appointments in the Department of Electrical Engineering and Computer Science (EECS) or in the Institute for Data, Systems, and Society (IDSS), which report into both the MIT Schwarzman College of Computing and the School of Engineering. There are also several new faculty members in shared positions between the college and other MIT departments and sections, including Political Science, Linguistics and Philosophy, History, and Architecture.
“Thanks to another successful year of collaborative searches, we have hired six additional faculty in shared positions, bringing the total to 20,” says Huttenlocher.
The new shared faculty include:
Bailey Flanigan is an assistant professor in the Department of Political Science, holding an MIT Schwarzman College of Computing shared position with EECS. Her research combines tools from social choice theory, game theory, algorithms, statistics, and survey methods to advance political methodology and strengthen democratic participation. She is interested in sampling algorithms, opinion measurement, and the design of democratic innovations like deliberative minipublics and participatory budgeting. Flanigan was a postdoc at Harvard University’s Data Science Initiative, and she earned her PhD in computer science from Carnegie Mellon University.
Brian Hedden PhD ’12 is a professor in the Department of Linguistics and Philosophy, holding an MIT Schwarzman College of Computing shared position with EECS. His research focuses on how we ought to form beliefs and make decisions. His works span epistemology, decision theory, and ethics, including ethics of AI. He is the author of “Reasons without Persons: Rationality, Identity, and Time” (Oxford University Press, 2015) and articles on topics such as collective action problems, legal standards of proof, algorithmic fairness, and political polarization. Prior to joining MIT, he was a faculty member at the Australian National University and the University of Sydney, and a junior research fellow at Oxford University. He received his BA from Princeton University and his PhD from MIT, both in philosophy.
Yunha Hwang is an assistant professor in the Department of Biology, holding an MIT Schwarzman College of Computing shared position with EECS. She is also a member of the Laboratory for Information and Decision Systems. Her research interests span machine learning for sustainable biomanufacturing, microbial evolution, and open science. She serves as the co-founder and chief scientist at Tatta Bio, a scientific nonprofit dedicated to advancing genomic AI for biological discovery. She holds a BS in computer science from Stanford University and a PhD in biology from Harvard University.
Ben Lindquist is an assistant professor in the History Section, holding an MIT Schwarzman College of Computing shared position with EECS. Through a historical lens, his work observes the ways that computing has circulated with ideas of religion, emotion, and divergent thinking. His book, “The Feeling Machine” (University of Chicago Press, forthcoming), follows the history of synthetic speech to examine how emotion became a subject of computer science. He was a postdoc in the Science in Human Culture Program at Northwestern University and earned his PhD in history from Princeton University.
Mariana Popescu is an assistant professor in the Department of Architecture, holding an MIT Schwarzman College of Computing shared position with EECS. She is also a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL). A computational architect and structural designer, Popescu has a strong interest and experience in innovative ways of approaching the fabrication process and use of materials in construction. Her area of expertise is computational and parametric design, with a focus on digital fabrication and sustainable design. Popescu earned her doctorate at ETH Zurich.
Paris Smaragdis SM ’97, PhD ’01 is a professor in the Music and Theater Arts Section, holding an MIT Schwarzman College of Computing shared position with EECS. His research focus lies at the intersection of signal processing and machine learning, especially as it relates to sound and music. Prior to coming to MIT, he worked as a research scientist at Mitsubishi Electric Research Labs, a senior research scientist at Adobe Research, and an Amazon Scholar with Amazon’s AWS. He spent 15 years as a professor at the University of Illinois Urbana-Champaign in the Computer Science Department, where he spearheaded the design of the CS+Music program, and served as an associate director of the School of Computer and Data Science. He holds a BMus from Berklee College of Music and earned his PhD in perceptual computing from MIT.
Daniel Varon is an assistant professor in the Department of Aeronautics and Astronautics, holding an MIT Schwarzman College of Computing shared position with IDSS. His work focuses on using satellite observations of atmospheric composition to better understand human impacts on the environment and identify opportunities to reduce them. An atmospheric scientist, Varon is particularly interested in greenhouse gases, air pollution, and satellite remote sensing. He holds an MS in applied mathematics and a PhD in atmospheric chemistry, both from Harvard University.
In addition, the School of Engineering has adopted the shared faculty search model to hire its first shared faculty member:
Mark Rau is an assistant professor in the Music and Theater Arts Section, holding a School of Engineering shared position with EECS. He is involved in developing graduate programming focused on music technology. He has an interest in musical acoustics, vibration and acoustic measurement, audio signal processing, and physical modeling synthesis. His work focuses on musical instruments and creative audio effects. He holds an MA in music, science, and technology from Stanford, as well as a BS in physics and BMus in jazz from McGill University. He earned his PhD at Stanford’s Center for Computer Research in Music and Acoustics.
The new core faculty are:
Mitchell Gordon is an assistant professor in EECS. He is also a member of CSAIL. In his research, Gordon designs interactive systems and evaluation approaches that bridge principles of human-computer interaction with the realities of machine learning. His work has won awards at conferences in human-computer interaction and artificial intelligence, including a best paper award at CHI and an Oral at NeurIPS. Gordon received a BS from the University of Rochester, and MS and PhD from Stanford University, all in computer science.
Omar Khattab is an assistant professor in EECS. He is also a member of CSAIL. His work focuses on natural language processing, information retrieval, and AI systems. His research includes developing new algorithms and abstractions for declarative AI programming and for composing retrieval and reasoning. He received his BS from Carnegie Mellon University and his PhD from Stanford University, both in computer science.
Rachit Nigam will join EECS as an assistant professor in January 2026. He will also be a member of CSAIL and the Microsystems Technology Laboratories. He works on programming languages and computer architecture to address the design, verification, and usability challenges of specialized hardware. He was previously a visiting scholar at MIT. Nigam earned an MS and PhD in computer science from Cornell University.
Top row (left to right): Bailey Flanigan, Brian Hedden, Yunha Hwang, and Ben Lindquist. Second row (left to right): Mariana Popescu, Paris Smaragdis, Daniel Varon, and Mark Rau. Third row (left to right): Mitchell Gordon, Omar Khattab, and Rachit Nigam
For centuries, humans have sought to study the stars and celestial bodies, whether through observations made by naked eye or by telescopes on the ground and in space that can view the universe across nearly the entire electromagnetic spectrum. Each view unlocks new information about the denizens of space — X-ray pulsars, gamma-ray bursts — but one is still missing: the low-frequency radio sky.
Researchers from MIT Lincoln Laboratory, the MIT Haystack Observatory, and Lowell Observatory are working on a NASA-funded concept study called the Great Observatory for Long Wavelengths, or GO-LoW, that outlines a method to view the universe at as-yet-unseen low frequencies using a constellation of thousands of small satellites. The wavelengths at these frequencies range from 15 meters to several kilometers, which means a very large telescope is required to see them clearly.
"GO-LoW will be a new kind of telescope, made up of many thousands of spacecraft that work together semi-autonomously, with limited input from Earth," says Mary Knapp, the principal investigator for GO-LoW at the MIT Haystack Observatory. "GO-LoW will allow humans to see the universe in a new light, opening up one of the very last frontiers in the electromagnetic spectrum."
The difficulty in viewing the low-frequency radio sky comes from Earth's ionosphere, a layer of the atmosphere that contains charged particles that prevent very low-frequency radio waves from passing through. Therefore, a space-based instrument is required to observe these wavelengths. Another challenge is that long-wavelength observations require correspondingly large telescopes, which would need to be many kilometers in length if built using traditional dish antenna designs. GO-LoW will use interferometry — a technique that combines signals from many spatially separated receivers that, when put together, will function as one large telescope — to obtain highly detailed data from exoplanets and other sources in space. A similar technique was used to make the first image of a black hole and, more recently, an image of the first known extrasolar radiation belts.
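To make the interferometry idea concrete, here is a minimal sketch, in Python, of how signals from just two spatially separated receivers can be combined: cross-correlating their outputs recovers the relative delay of an incoming wavefront, the basic quantity an interferometer measures across many receiver pairs. This is an illustration only, not GO-LoW’s processing pipeline; the sample rate, delay, and noise levels are made-up values.

```python
# Illustrative only: recover the relative delay between two receivers that see
# the same broadband sky signal. An interferometer combines this kind of
# measurement across many receiver pairs to act like one large telescope.
import numpy as np

rng = np.random.default_rng(0)
fs = 1_000.0                                  # sample rate in Hz (made up)
t = np.arange(0, 1.0, 1.0 / fs)               # one second of samples

true_delay = 0.013                            # seconds; geometric delay (made up)
sky = rng.normal(size=t.size)                 # broadband "sky" signal
rx_a = sky + 0.5 * rng.normal(size=t.size)    # receiver A: sky plus noise
rx_b = np.roll(sky, int(true_delay * fs)) + 0.5 * rng.normal(size=t.size)

# Cross-correlate the two receiver outputs and find the best-matching lag.
corr = np.correlate(rx_a - rx_a.mean(), rx_b - rx_b.mean(), mode="full")
lags = np.arange(-t.size + 1, t.size)
estimated_delay = -lags[np.argmax(corr)] / fs

print(f"true delay: {true_delay:.3f} s, estimated: {estimated_delay:.3f} s")
```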
Melodie Kao, a member of the team from Lowell Observatory, says the data could reveal details about an exoplanet's makeup and potential for life. "[The radio wave aurora around an exoplanet] carries important information, such as whether or not the planet has a magnetic field, how strong it is, how fast the planet is rotating, and even hints about what's inside," she says. "Studying exoplanet radio aurorae and the magnetic fields that they trace is an important piece of the habitability puzzle, and it's a key science goal for GO-LoW."
Several recent trends and technology developments will make GO-LoW possible in the near future, such as the declining cost of mass-produced small satellites, the rise of mega-constellations, and the return of large, high-capacity launch vehicles like NASA's Space Launch System. GO-LoW would be the first mega-constellation that uses interferometry for scientific purposes.
The GO-LoW constellation will be built through several successive launches, each containing thousands of spacecraft. Once they reach low-Earth orbit, the spacecraft will be refueled before journeying on to their final destination — an Earth-sun Lagrange point where they will then be deployed. Lagrange points are regions in space where the gravitational forces of two large celestial bodies (like the sun and Earth) are in equilibrium, such that a spacecraft requires minimal fuel to maintain its position relative to the two larger bodies. At this long distance from Earth (1 astronomical unit, or approximately 93 million miles), there will also be much less radio-frequency interference that would otherwise obscure GO-LoW’s sensitive measurements.
"GO-LoW will have a hierarchical architecture consisting of thousands of small listener nodes and a smaller number of larger communication and computation nodes (CCNs)," says Kat Kononov, a team member from Lincoln Laboratory's Applied Space Systems Group, who has been working with MIT Haystack staff since 2020, with Knapp serving as her mentor during graduate school. A node refers to an individual small satellite within the constellation. "The listener nodes are small, relatively simple 3U CubeSats — about the size of a loaf of bread — that collect data with their low-frequency antennas, store it in memory, and periodically send it to their communication and computation node via a radio link." In comparison, the CCNs are about the size of a mini-fridge.
The CCN will keep track of the positions of the listener nodes in their neighborhood; collect and reduce the data from their respective listener nodes (around 100 of them); and then transmit that data back to Earth, where more intensive data processing can be performed.
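The hierarchy described above can be pictured with a small sketch. The class names, buffer format, and “reduction” step below are hypothetical stand-ins, not mission software; they only illustrate the flow of data from many listener nodes up through a CCN.

```python
# Hypothetical sketch of the listener-node / CCN hierarchy; names and the
# "reduction" step are stand-ins, not GO-LoW flight software.
from dataclasses import dataclass, field

@dataclass
class ListenerNode:
    node_id: int
    buffer: list = field(default_factory=list)

    def record(self, samples):
        self.buffer.extend(samples)            # store antenna samples in memory

    def downlink_to_ccn(self):
        data, self.buffer = self.buffer, []    # hand data to the CCN, clear memory
        return data

@dataclass
class CommunicationComputationNode:
    listeners: list

    def collect_and_reduce(self):
        # Stand-in "reduction": average everything received this cycle before
        # relaying a much smaller product back to Earth.
        collected = [s for node in self.listeners for s in node.downlink_to_ccn()]
        return sum(collected) / len(collected) if collected else 0.0

# One CCN serving a neighborhood of roughly 100 listener nodes.
ccn = CommunicationComputationNode([ListenerNode(i) for i in range(100)])
for node in ccn.listeners:
    node.record([0.1 * node.node_id])          # fake samples
print("reduced product for downlink:", ccn.collect_and_reduce())
```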
At full strength, with approximately 100,000 listener nodes, the GO-LoW constellation should be able to see exoplanets with magnetic fields in the solar neighborhood — within 5 to 10 parsecs — many for the very first time.
The GO-LoW research team recently published the results of their findings from Phase I of the study, which identified a type of advanced antenna called a vector sensor as the best type for this application. In 2024, Lincoln Laboratory designed a compact deployable version of the sensor suitable for use in space.
The team is now working on Phase II of the program, which is to build a multi-agent simulation of constellation operations.
"What we learned during the Phase I study is that the hard part for GO-LoW is not any specific technology … the hard part is the system: the system engineering and the autonomy to run the system," says Knapp. "So, how do we build this constellation such that it's a tractable problem? That's what we’re exploring in this next part of the study."
GO-LoW is one of many civil space programs at Lincoln Laboratory that aim to harness advanced technologies originally developed for national security to enable new space missions that support science and society. "By adapting these capabilities to serve new stakeholders, the laboratory helps open novel frontiers of discovery while building resilient, cost-effective systems that benefit the nation and the world," says Laura Kennedy, who is the deputy lead of Lincoln Laboratory's Civil Space Systems and Technology Office.
"Like landing on the moon in 1969, or launching Hubble in the 1990s, GO-LoW is envisioned to let us see something we've never seen before and generate scientific breakthroughs," says Kononov.
GO-LoW is a collaboration among MIT Lincoln Laboratory, the MIT Haystack Observatory, and Lowell Observatory, along with Lenny Paritsky from LeafLabs and Jacob Turner from Cornell University.
It’s hard to keep up with the ever-changing trends of the fashion world. What’s “in” one minute is often out of style the next season, potentially causing you to re-evaluate your wardrobe.
Staying current with the latest fashion styles can be wasteful and expensive, though. Roughly 92 million tons of textile waste are produced annually, including the clothes we discard when they go out of style or no longer fit. But what if we could simply reassemble our clothes into whatever outfits we wanted, adapting to trends and the ways our bodies change?
A team of researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Adobe is attempting to bring eco-friendly, versatile garments to life. Their new “Refashion” software system breaks down fashion design into modules — essentially, smaller building blocks — by allowing users to draw, plan, and visualize each element of a clothing item. The tool turns fashion ideas into a blueprint that outlines how to assemble each component into reconfigurable clothing, such as a pair of pants that can be transformed into a dress.
With Refashion, users simply draw shapes and place them together to develop an outline for adaptable fashion pieces. It’s a visual diagram that shows how to cut garments, providing a straightforward way to design things like a shirt with an attachable hood for rainy days. One could also create a skirt that can then be reconfigured into a dress for a formal dinner, or maternity wear that fits during different stages of pregnancy.
“We wanted to create garments that consider reuse from the start,” says Rebecca Lin, MIT Department of Electrical Engineering and Computer Science (EECS) PhD student, CSAIL and Media Lab researcher, and lead author on a paper presenting the project. “Most clothes you buy today are static, and are discarded when you no longer want them. Refashion instead makes the most of our garments by helping us design items that can be easily resized, repaired, or restyled into different outfits.”
Modules à la mode
The researchers conducted a preliminary user study where both designers and novices explored Refashion and were able to create garment prototypes. Participants assembled pieces such as an asymmetric top that could be extended into a jumpsuit, or remade into a formal dress, often within 30 minutes. These results suggest that Refashion has the potential to make prototyping garments more approachable and efficient. But what features might contribute to this ease of use?
Its interface first presents a simple grid in its “Pattern Editor” mode, where users can connect dots to outline the boundaries of a clothing item. It’s essentially drawing rectangular panels and specifying how different modules will connect to each other.
Users can customize the shape of each component, creating a straight design for garments (which might be useful for less form-fitting items, like chinos) or tinkering with one of Refashion’s templates. A user can edit pre-designed blueprints for things like a T-shirt, fitted blouse, or trousers.
Another, more creative route is to change the design of individual modules. For starters, one can choose the “pleat” feature to fold a garment over itself, similar to an accordion; it’s a useful way to design something like a maxi dress. The “gather” option adds an artsy flourish, crumpling the fabric together to create puffy skirts or sleeves. A user might even go with the “dart” module, which removes a triangular piece from the fabric, allowing them to shape a garment at the waist (perhaps for a pencil skirt) or tailor it to the upper body (fitted shirts, for instance).
While it might seem that each of these components needs to be sewn together, Refashion enables users to connect garments through more flexible, efficient means. Edges can be seamed together via double-sided connectors such as metal snaps (like the buttons used to close a denim jacket) or Velcro dots. A user could also fasten them with pins called brads, which have a pointed side that sticks through a hole and splits into two “legs” to attach to another surface; it’s a handy way to secure, say, a picture on a poster board. Both connective methods make it easy to reconfigure modules, should they be damaged or a “fit check” call for a new look.
As a user designs their clothing piece, the system automatically creates a simplified diagram of how it can be assembled. The pattern is divided into numbered blocks, which are dragged onto different parts of a 2D mannequin to specify the position of each component. The user can then simulate how their sustainable clothing will look on 3D models of a range of body types (one can also upload a model).
Finally, a digital blueprint for sustainable clothing can extend, shorten, or combine with other pieces. Thanks to Refashion, a new piece could be emblematic of a potential shift in fashion: Instead of buying new clothes every time we want a new outfit, we can simply reconfigure existing ones. Yesterday’s scarf could be today’s hat, and today’s T-shirt could be tomorrow’s jacket.
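As a rough illustration of this modular idea, here is a toy sketch of panels joined by detachable connectors. It is not Refashion’s actual data model; the panel dimensions, connector names, and the assemble helper are assumptions made up for this example.

```python
# Toy sketch of reconfigurable panels and detachable connectors; not the
# Refashion data model. Sizes, names, and the assemble() helper are made up.
from dataclasses import dataclass

@dataclass(frozen=True)
class Panel:
    name: str
    width_cm: float
    height_cm: float

@dataclass(frozen=True)
class Connection:
    panel_a: str
    panel_b: str
    connector: str            # e.g. "snap", "velcro", "brad" (all detachable)

def assemble(panels, connections):
    """Return a simple blueprint: which panels exist and how they attach."""
    names = {p.name for p in panels}
    for c in connections:
        if c.panel_a not in names or c.panel_b not in names:
            raise ValueError(f"unknown panel in connection: {c}")
    return {"panels": [p.name for p in panels], "connections": connections}

# A two-panel skirt, later extended into a dress by snapping on a bodice panel.
skirt_panels = [Panel("front", 60, 50), Panel("back", 60, 50)]
skirt = assemble(skirt_panels, [Connection("front", "back", "snap")])
dress = assemble(skirt_panels + [Panel("bodice", 60, 40)],
                 [Connection("front", "back", "snap"),
                  Connection("bodice", "front", "velcro")])
print(skirt["panels"], "->", dress["panels"])
```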
“Rebecca’s work is at an exciting intersection between computation and art, craft, and design,” says MIT EECS professor and CSAIL principal investigator Erik Demaine, who advises Lin. “I’m excited to see how Refashion can make custom fashion design accessible to the wearer, while also making clothes more reusable and sustainable.”
Constant change
While Refashion presents a greener vision for the future of fashion, the researchers note that they’re actively improving the system. They intend to revise the interface to support more durable items, stepping beyond standard prototyping fabrics. Refashion may soon support other modules, like curved panels, as well. The CSAIL-Adobe team may also evaluate whether their system can use as few materials as possible to minimize waste, and whether it can help “remix” old store-bought outfits.
Lin also plans to develop new computational tools that help designers create unique, personalized outfits using colors and textures. She’s exploring how to design clothing by patchwork — essentially, cutting out small pieces from materials like decorative fabrics, recycled denim, and crochet blocks and assembling them into a larger item.
“This is a great example of how computer-aided design can also be key in supporting more sustainable practices in the fashion industry,” says Adrien Bousseau, a senior researcher at Inria Centre at Université Côte d'Azur who wasn’t involved in the paper. “By promoting garment alteration from the ground up, they developed a novel design interface and accompanying optimization algorithm that helps designers create garments that can undergo a longer lifetime through reconfiguration. While sustainability often imposes additional constraints on industrial production, I am confident that research like the one by Lin and her colleagues will empower designers in innovating despite these constraints.”
Lin wrote the paper with Adobe Research scientists Michal Lukáč and Mackenzie Leake, who is the paper’s senior author and a former CSAIL postdoc. Their work was supported, in part, by the MIT Morningside Academy for Design, an MIT MAKE Design-2-Making Mini-Grant, and the Natural Sciences and Engineering Research Council of Canada. The researchers presented their work recently at the ACM Symposium on User Interface Software and Technology.
With Refashion, users simply draw shapes and place them together to develop an outline for adaptable fashion pieces. It’s a visual diagram that demonstrates how to cut garments, providing a straightforward way to design things like pants that can be reconfigured into a dress.
Her science writing is not for the squeamish
It takes a lot to gross out ‘Replaceable You’ author Mary Roach
Alvin Powell
Harvard Staff Writer
October 17, 2025
Mary Roach. Niles Singer/Harvard Staff Photographer
It is hard to gross out Mary Roach, but not impossible.
The science writer’s books have explored uncomfortable topics ranging from the afterlife of cadavers to the physiology of sex to the “alimentary canal” running from your mouth to your anus. She once visited a “body farm” in Tennessee where corpses are left to decompose to provide a time standard to determine time of death in murder and other cases.
“That was tough because it was visual and also olfactory, and at one point the researcher said, ‘If you put your ear really close, you can hear the maggots feeding,’” said Roach in a recent talk at Harvard’s Science Center. That maggot fest didn’t do it, though, in part because Roach said her fascination with her topics often outweighs her disgust or horror.
But then her home county medical examiner stepped to the plate. That official, Roach said, “made it a personal goal to gross me out.”
“And she did, one day, succeed. She just said, ‘Oh, you should come in this morning, there’s an interesting case.’ And I don’t need to describe that.”
It was a rare instance of Roach leaving things to the audience’s imagination, something her books — including her latest, “Replaceable You: Adventures in Human Anatomy,” published in September — don’t tend to do. Roach spoke to a packed lecture hall, responding to questions from fellow science writer Elizabeth Preston and members of the audience. Roach proved as engaging and taboo-breaking in person as she is on the page.
In the talk, sponsored by the Harvard Book Store, the Faculty of Arts and Sciences’ Division of Science, and Harvard Library, Roach said she starts with a broad topic of interest to her and then goes by instinct. When she hears of an interesting and illuminating fact, a finding, or some other wrinkle, she emails the researchers — she spends a lot of time emailing strangers, she said — and visits the site to see for herself. Sometimes those trips are a bust — not as interesting as she expected, or without the access she envisioned. She told of volunteering for an experiment that she thought would lead to a compelling article about how airplane seat ergonomics are tested. When she arrived, she was told she had to sit in the seat for eight hours.
“I sold it to this magazine and then I got there and it’s like, ‘You’re going to be sitting in this chair for eight hours.’ And that’s what it was,” Roach said. “I don’t think it was my most interesting piece.”
Usually, however, Roach said she can take away something from a trip and often gets more than she anticipated: “You don’t know until you get there.”
Her humorous approach to sometimes serious subjects involves walking a fine line, however, and she admitted she sometimes wobbles. That’s where her editors come in, highlighting when her intended light touch becomes disrespectful, or when her treatment of a subject is simply too much.
Her most recent book, “Replaceable You,” explores human efforts to repair and replace parts of the body. The book takes the reader from early, fumbling attempts to sophisticated modern efforts at prosthetic limbs, joints, eyeballs, reproductive organs, and skin grafts. She describes an early effort at a skin graft where the skin remained attached to the donor animal — a dog in this case — in order to ensure blood flow and the graft’s survival on its human host. The account of the experiment, done by a French doctor in the 1800s, was in French, and when Roach saw that it involved a living dog, she assumed it was something small and portable.
“It said ‘un chien danois’ and I thought, ‘Oh, Danish dog breeds.’ I’m not that familiar with Danish dog breeds but I’m picturing quite a small one because it’s going to be attached,” Roach said. “But, in fact, no. It was a Great Dane — and he complained about ‘mouvement continuelle excessif’ (excessive continual movement). And how do I — just because of the image — how do I not?”
Over the course of working on the project, Roach said she became convinced that the body is a marvel of evolutionary engineering, with each organ and part adapted nearly perfectly to its purpose. Human efforts to replicate them almost always fall short, she said.
“I did realize early on that the human body is such a miraculous machine and to think that you could create any component of it as good as what we’re born with, even something that is malfunctioning, and you could come up with a replacement,” Roach said. “There was a point where I thought I need to change the title. … Technically it should be ‘Irreplaceable.’”
Over 60 and online
In new book, law professor busts myths about ‘hapless grandparents’ in the digital age
Sy Boles
Harvard Staff Writer
October 17, 2025
The fastest-growing demographic of internet users is people age 60 and older, but the group’s behavior online is poorly understood — and often stereotyped.
That’s according to John Palfrey, former executive director of the Berkman Klein Center for Internet & Society and a visiting professor at Harvard Law School.
In a new book called “Wired Wisdom: How to Age Better Online,” co-authored with the University of Zurich’s Eszter Hargittai, Palfrey busts common myths about how older adults relate to privacy, security, and connection in the digital age.
“Too often we have the image in our mind of a hapless grandparent or older person in our life who can’t turn on the new phone they’ve received or they can’t fix the blinking light on the VCR,” said Palfrey, who is now the president of the John D. and Catherine T. MacArthur Foundation. “What we really wanted to do with this book is help make sure that the use of technology is actually a part of thriving in older age, and not something that’s a hardship for older people.”
In this interview with the Gazette, which has been edited for length and clarity, Palfrey explains what we get wrong about the online lives of over-60s, and what can be done to support older users of new tech.
What made you want to write about older adults’ online lives?
When I was at the Berkman Klein Center in the aughts, the early days of the internet, I co-wrote a book with Urs Gasser, “Born Digital: How Children Grow Up in a Digital Age.” I had two very small kids at the time, and there were real concerns about how kids use technology. I was thinking through, how do you raise kids in this time?
This book came about in a similar way. I have parents who are both roughly 80. They’re both wonderful and brilliant people, and sometimes technology is their friend, and sometimes it’s not. I thought, how do I engage as a middle-aged person in supporting older adults? What do the data tell us about the most useful things to do?
The book draws on external data, but you also contribute a new survey of 4,000 adults over 60. What stuck out to you from that research?
Many of the myths we hold are not entirely true, and I’ll give you one example that has to do with safety and security. We think about older people as totally helpless in that they’re constantly being scammed and so forth. But it turns out, when you dig into the data, that’s not the case. Older adults are more skeptical than younger people about scams, not less. In fact, they’re able to use their history and their life experience in a very positive way to apply themselves to these online scams.
What’s going on in the data is that older people are targeted way more than younger people. Why? Because they likely have more money, they may be less good with the technology itself, and sometimes they’re suffering from cognitive decline.
So the truth is not that older adults are completely helpless against scams. It’s that they’re facing way worse conditions, and we don’t give them the same kind of training and support that we give younger people. And some of them are sitting ducks as a result.
If we treat older adults like they don’t know anything, that’s not a very good starting point for figuring out the right interventions, whether it’s changing the design of the technology, creating new laws, or intervening as family members.
John Palfrey. Photo courtesy of the MacArthur Foundation
But you did find some real differences between the generations in terms of online privacy: Older adults tend to be much more skeptical about sharing personal information, whether that’s on social media or through medical alert systems that could be lifesaving. What advice do you have for navigating those conversations?
That’s right. For younger generations, everything in their life has been recorded, from their sonogram when their parent was pregnant to their baby pictures to their Little League games, and all of that is stored digitally. That’s just not true for their grandparents. There’s a big distinction coming in, and in their final chapters, older adults are having to grapple with that.
For me, that means we really shouldn’t assume too much about what people know about technology or what they will intuit, especially if they’ve been out of the workforce for a long time. They might not know how much data about them is being held by other people.
So is it really about talking to the older adults in your life and figuring out their specific needs and concerns?
Yes, I think so. This is one of the challenges: There’s a huge diversity within this age cohort, which shouldn’t be that surprising. At the MacArthur Foundation, where I work now, we have people in their 70s who do quarterly cybersecurity trainings — who are teaching the cybersecurity trainings. They just know it incredibly well. And then there are people who are exactly the same age, who haven’t been in the workforce at any point in the digital era, who know absolutely nothing about this stuff, who have no context for it.
How are older adults reckoning with the contradictions of social media — that it can both connect us and isolate us, inform us and misinform us?
It’s very much a mixed bag on both. That’s the reality for kids, too, and for those of us in the middle. Too much of it can be really harmful, but a decent dose can be a part of a positive life. To give an example, if you’re an older person and you live in Europe, and you can use WhatsApp to connect with a relative in the U.S., it can be life-giving to be able to send them texts and see their face once in a while. Having a healthy dose of social media can be a part of a strong social life.
What can be done to improve things?
The most important thing we could do is design technologies with older people in mind. Have you ever heard of older people being involved in the design of a new technology at, say, Apple? I don’t think that kind of thing is at the top of mind for those designing and carrying out these technologies. Generally speaking, new technologies are designed by quite young people for other young people, and we do little thinking about the experiences of older people. It could be as simple as font size.
We also focus on support. Great benefits can come from a grandchild connecting with their grandparent through the use of technology, helping them navigate that. The child can help the grandparent, but you also never know what skills and insights might go in the other direction. The grandchild is going to learn a whole pile of stuff from the grandparent in the process.
I really hope that as a society we focus more on treating our older adults better, really centering them in ways we don’t always today. You’ll hear people say that when we center people with disabilities in the way we build cities and buildings, we end up improving things for everybody. The same is going to be true for technology design: It’s going to become more universal as we design for older people. Particularly as artificial intelligence becomes more and more central, we urgently need to pay attention to how that’s going to affect the older population. We’ll benefit in untold ways if we do so.
Before cells can divide, they first need to replicate all of their chromosomes, so that each of the daughter cells can receive a full set of genetic material. Until now, scientists had believed that as division occurs, the genome loses the distinctive 3D internal structure that it typically forms.
Once division is complete, it was thought, the genome gradually regains that complex, globular structure, which plays an essential role in controlling which genes are turned on in a given cell.
However, a new study from MIT shows that in fact, this picture is not fully accurate. Using a higher-resolution genome mapping technique, the research team discovered that small 3D loops connecting regulatory elements and genes persist in the genome during cell division, or mitosis.
“This study really helps to clarify how we should think about mitosis. In the past, mitosis was thought of as a blank slate, with no transcription and no structure related to gene activity. And we now know that that’s not quite the case,” says Anders Sejr Hansen, an associate professor of biological engineering at MIT. “What we see is that there’s always structure. It never goes away.”
The researchers also discovered that these regulatory loops appear to strengthen when chromosomes become more compact in preparation for cell division. This compaction brings genetic regulatory elements closer together and encourages them to stick together. This may help cells “remember” interactions present in one cell cycle and carry them into the next one.
“The findings help to bridge the structure of the genome to its function in managing how genes are turned on and off, which has been an outstanding challenge in the field for decades,” says Viraat Goel PhD ’25, the lead author of the study.
Hansen and Edward Banigan, a research scientist in MIT’s Institute for Medical Engineering and Science, are the senior authors of the paper, which appears today in Nature Structural and Molecular Biology. Leonid Mirny, a professor in MIT’s Institute for Medical Engineering and Science and the Department of Physics, and Gerd Blobel, a professor at the Perelman School of Medicine at the University of Pennsylvania, are also authors of the study.
A surprising finding
Over the past 20 years, scientists have discovered that inside the cell nucleus, DNA organizes itself into 3D loops. While many loops enable interactions between genes and regulatory regions that may be millions of base pairs away from each other, others are formed during cell division to compact chromosomes. Much of the mapping of these 3D structures has been done using a technique called Hi-C, originally developed by a team that included MIT researchers and was led by Job Dekker at the University of Massachusetts Chan Medical School. To perform Hi-C, researchers use enzymes to chop the genome into many small pieces and biochemically link pieces that are near each other in 3D space within the cell’s nucleus. They then determine the identities of the interacting pieces by sequencing them.
However, that technique doesn’t have high enough resolution to pick out all specific interactions between genes and regulatory elements such as enhancers. Enhancers are short sequences of DNA that can help to activate the transcription of a gene by binding to the gene’s promoter — the site where transcription begins.
In 2023, Hansen and others developed a new technique that allows them to analyze 3D genome structures with 100 to 1,000 times greater resolution than was previously possible. This technique, known as Region-Capture Micro-C (RC-MC), uses a different enzyme that cuts the genome into small fragments of similar size. It also focuses on a smaller segment of the genome, allowing for high-resolution 3D mapping of a targeted genome region.
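The general idea behind contact-map assays such as Hi-C and RC-MC can be sketched in a few lines: pairs of genomic positions found linked together are binned into a symmetric contact matrix whose resolution is set by the bin size. The sketch below is a generic illustration with toy coordinates, not the published RC-MC analysis pipeline.

```python
# Generic illustration of building a binned contact matrix from ligated pairs;
# coordinates, region size, and bin size are toy values, not real data.
import numpy as np

def contact_matrix(pairs, region_start, region_end, bin_size):
    """pairs: iterable of (pos1, pos2) genomic coordinates within one region."""
    n_bins = (region_end - region_start) // bin_size
    matrix = np.zeros((n_bins, n_bins), dtype=int)
    for pos1, pos2 in pairs:
        i = (pos1 - region_start) // bin_size
        j = (pos2 - region_start) // bin_size
        if 0 <= i < n_bins and 0 <= j < n_bins:
            matrix[i, j] += 1
            matrix[j, i] += 1                  # contacts are symmetric
    return matrix

# A 30 kb region at 1 kb bins, with one recurring enhancer-promoter contact.
pairs = [(5_200, 18_700)] * 5 + [(2_100, 2_900), (12_400, 27_300)]
m = contact_matrix(pairs, region_start=0, region_end=30_000, bin_size=1_000)
print(m[5, 18])                                # the recurring contact is enriched
```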
Using this technique, the researchers were able to identify a new kind of genome structure that hadn’t been seen before, which they called “microcompartments.” These are tiny, highly connected loops that form when enhancers and promoters located near each other stick together.
In that paper, experiments revealed that these loops were not formed by the same mechanisms that form other genome structures, but the researchers were unable to determine exactly how they do form. In hopes of answering that question, the team set out to study cells as they undergo cell division. During mitosis, chromosomes become much more compact, so that they can be duplicated, sorted, and divvied up between two daughter cells. As this happens, larger genome structures called A/B compartments and topologically associating domains (TADs) disappear completely.
The researchers believed that the microcompartments they had discovered would also disappear during mitosis. By tracking cells through the entire cell division process, they hoped to learn how the microcompartments appear after mitosis is completed.
“During mitosis, it has been thought that almost all gene transcription is shut off. And before our paper, it was also thought that all 3D structure related to gene regulation was lost and replaced by compaction. It’s a complete reset every cell cycle,” Hansen says.
However, to their surprise, the researchers found that microcompartments could still be seen during mitosis, and in fact they become more prominent as the cell goes through cell division.
“We went into this study thinking, well, the one thing we know for sure is that there’s no regulatory structure in mitosis, and then we accidentally found structure in mitosis,” Hansen says.
Using their technique, the researchers also confirmed that larger structures such as A/B compartments and TADs do disappear during mitosis, as had been seen before.
“This study leverages the unprecedented genomic resolution of the RC-MC assay to reveal new and surprising aspects of mitotic chromatin organization, which we have overlooked in the past using traditional 3C-based assays. The authors reveal that, contrary to the well-described dramatic loss of TADs and compartmentalization during mitosis, fine-scale “microcompartments” — nested interactions between active regulatory elements — are maintained or even transiently strengthened,” says Effie Apostolou, an associate professor of molecular biology in medicine at Weill Cornell Medicine, who was not involved in the study.
A spike in transcription
The findings may offer an explanation for a spike in gene transcription that usually occurs near the end of mitosis, the researchers say. Since the 1960s, it had been thought that transcription ceased completely during mitosis, but in 2016 and 2017, a few studies showed that cells undergo a brief spike of transcription, which is quickly suppressed until the cell finishes dividing.
In their new study, the MIT team found that during mitosis, microcompartments are more likely to be found near the genes that spike during cell division. They also discovered that these loops appear to form as a result of the genome compaction that occurs during mitosis. This compaction brings enhancers and promoters closer together, allowing them to stick together to form microcompartments.
Once formed, the loops that constitute microcompartments may activate gene transcription somewhat by accident, which is then shut off by the cell. When the cell finishes dividing, entering a state known as G1, many of these small loops become weaker or disappear.
“It almost seems like this transcriptional spiking in mitosis is an undesirable accident that arises from generating a uniquely favorable environment for microcompartments to form during mitosis,” Hansen says. “Then, the cell quickly prunes and filters many of those loops out when it enters G1.”
Because chromosome compaction can also be influenced by a cell’s size and shape, the researchers are now exploring how variations in those features affect the structure of the genome and in turn, gene regulation.
“We are thinking about some natural biological settings where cells change shape and size, and whether we can perhaps explain some 3D genome changes that previously lack an explanation,” Hansen says. “Another key question is how does the cell then pick what are the microcompartments to keep and what are the microcompartments to remove when you enter G1, to ensure fidelity of gene expression?”
The research was funded in part by the National Institutes of Health, a National Science Foundation CAREER Award, the Gene Regulation Observatory of the Broad Institute, a Pew-Steward Scholar Award for Cancer Research, the Mathers Foundation, the MIT Westaway Fund, the Bridge Project of the Koch Institute and Dana-Farber/Harvard Cancer Center, and the Koch Institute Support (core) Grant from the National Cancer Institute.
MIT experiments have revealed the existence of “microcompartments,” shown in yellow, within the 3D structure of the genome. These compartments are formed by tiny loops that may play a role in gene regulation.
A glass plate, a delicate tube and an oil bath are all that is required: thanks to a new method, researchers at ETH Zurich can produce tens of thousands of tiny droplets within minutes. This enables them to test enzymes and active ingredients faster, more precisely and in a more resource-efficient manner than previously.
NUS celebrated the official reopening of Yusof Ishak House (YIH) on 15 October 2025, following a three-year rejuvenation. Graced by Guest-of-Honour Dr Vivian Balakrishnan, Minister for Foreign Affairs and NUS alumnus, the reopening carried the theme “From Here, You Radiate” — a fitting reflection of YIH’s refreshed role as the epicentre of student life.
Organised by the NUS Office of Student Affairs, the day-long event drew about 2,500 attendees, comprising students, staff, faculty, alumni and guests. Held in conjunction with WellNUS Festival 2025, the event was part of the NUS120 celebrations.
Bringing fresh energy to the revitalised space, 17 student groups came together to organise over 28 activities, workshops and performances. Ranging from an immersive art installation by Year 3 Information Systems undergraduate Chantel Ong to activities such as stamp-based art and botanical crafts, as well as captivating mash-ups by performing arts groups, these student-led initiatives showcased the creativity, collaboration and vibrancy that define the NUS student life experience.
Among them was NUS Esports, one of the student groups that has benefitted from this refurbishment. The group now conducts its training sessions and events in the esports room at YIH, a dedicated space where members can gather to play games, hang out and make new friends. With the sophisticated setup, they have seen increased interest from students and hope to attract more varsity players and members to the club.
NUS Esports President and Year 4 undergraduate at the College of Design and Engineering, You Wen Yi, Gio said, “Leveraging this unique space given to us by the school, we intend to host many more community-centric events in hopes of bringing together more passionate individuals and growing the Esports community.”
A vibrant space to learn, unwind, and connect
Built in 1977, YIH has long served as a social and cultural hub for generations of students, as well as the home of the NUS Students’ Union (NUSSU), the pinnacle of student governance in NUS. As NUSSU’s base, the building has become the historic nerve centre for student leadership development, hosting generations of changemakers advocating for the issues of their times.
“Beyond all the new facilities that you see, the actual foundation of this building hasn't really changed, and to me, that represents the solid foundation that past batches of student leaders have laid for us,” NUSSU President Stephen Chen said at the reopening. “On top of that, you also see that the walls are still quite fresh and barren, and these actually serve as a blank canvas for us to write our own legacy, to reimagine our initiatives, to see how we can push the boundaries of student life even further.”
In his address, Dr Balakrishnan, who is also a former NUSSU President, recalled fond memories of his time as an NUS student, noting that they were the best times of his life. He also encouraged students to “spend time doing things outside the classroom”, reminding them that “these are the real lessons of life”.
His remarks echoed the education philosophy behind NUSOne – which emphasises self-discovery and out-of-classroom learning to provide students with holistic education and allow them to develop to their full potential. With the reopening of YIH, the University hopes to drive greater momentum for student activities and advance the objectives of this initiative, which launched in August 2024.
To foster creativity, collaboration and connections, YIH offers a variety of thoughtfully designed spaces that not only allow students to learn but also to unwind and recharge. Its key features include:
Four levels of naturally ventilated open space – including a roof terrace – equipped with occupancy sensor-activated ceiling fans and lights, adaptable for use by individuals, small groups or for larger events
Multi-purpose rooms with an energy-efficient hybrid cooling system
Purpose-built spaces like music and dance studios, as well as PitStop@YIH, where students can unwind through recreational activities
Students also enjoy a variety of food options, including healthier choices, to refuel throughout the day. Some of these include:
Central Square @ YIH – an expanded canteen space on the ground floor with six stalls, including Havé, an Indian Vegetarian stall, Dapur Pandan, a Muslim food stall, and four other food kiosks
Speciality food options like Wild Skew bistro at Social Commons on Level 2 and CHAGEE’s first signing store in Southeast Asia – designed for the deaf and hard-of-hearing – on Level 1
24-hour convenience store and vending machines
Built for sustainability
More than just a physical upgrade, the newly refurbished YIH also reflects NUS’ strong commitment to core sustainability values. Led by NUS faculty members, the renovation embraced an adaptive reuse approach, retaining most of the building’s original structure to preserve the building’s heritage and embodied carbon, while incorporating new features such as:
Improved daylight access and greenery, with views of the Ridge – the secondary rainforest on campus
More naturally ventilated study spaces, equipped with occupancy sensor-activated ceiling fans and lights to reduce energy wastage
A hybrid cooling system that is more energy-efficient and healthier for users, delivering 100 per cent fresh air with elevated fan speed at around 26.5°C to maintain thermal comfort while reducing energy consumption
Designed to target Net-Zero Energy (NZE) performance, YIH carefully manages its energy consumption to balance against on-site renewable energy generated by rooftop solar photovoltaics. Prior to its rejuvenation in 2022, YIH’s annual electricity consumption was approximately 2.2 gigawatt-hours (GWh). Following the renovation, the building’s annual energy use is projected to decrease by more than 1 GWh, slightly more than the yearly consumption of 215 four-room HDB flats.
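As a back-of-envelope check of that comparison, the sketch below assumes an average four-room HDB flat uses roughly 4,600 kWh of electricity per year; that per-flat figure is an assumption for illustration, not a number given in the article.

```python
# Assumed per-flat usage, for illustration only (not stated in the article).
projected_savings_kwh = 1_000_000              # "more than 1 GWh" in kWh
assumed_flat_usage_kwh_per_year = 4_600        # assumed four-room HDB flat average
flats_equivalent = projected_savings_kwh / assumed_flat_usage_kwh_per_year
print(f"~{flats_equivalent:.0f} four-room flats")   # roughly matches the quoted 215
```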
WellNUS Festival 2025
Held in conjunction with the YIH reopening, the sixth edition of WellNUS Festival was a hallmark event dedicated to promoting mental, physical and emotional well-being. This year’s festival featured the inaugural WellNUS Resource Fair, which brought together a wide network of support services, including seven student support units from across the university and five external social service agencies, to provide wellness resources both on and off campus.
Open to both students and staff, the month-long festival featured a diverse range of activities — from sports and crafts to health talks — aimed at supporting both mental and physical well-being. One popular activity was “Take a Paws”, which offered participants a chance to unwind with therapy dogs from The Dogtors. Designed as a 30-minute session, the initiative aimed to help students destress through guided interactions with the dogs and their handlers.
Alongside the lively WellNUS Festival, the reopening of YIH brought a renewed buzz and energy back to this iconic campus building. Beyond hosting university events, YIH is set to resume its role as a key gathering space for the NUS community.
Two leading experts in the field of carbon capture and sequestration (CCS) — Howard J. Herzog, a senior research engineer in the MIT Energy Initiative, and Niall Mac Dowell, a professor in energy systems engineering at Imperial College London — explore methods for removing carbon dioxide already in the atmosphere in their new book, “Carbon Removal.” Published in October, the book is part of the Essential Knowledge series from the MIT Press, which consists of volumes “synthesizing specialized subject matter for nonspecialists” and includes Herzog’s 2018 book, “Carbon Capture.”
Burning fossil fuels, as well as other human activities, cause the release of carbon dioxide (CO2) into the atmosphere, where it acts like a blanket that warms the Earth, resulting in climate change. Much attention has focused on mitigation technologies that reduce emissions, but in their book, Herzog and Mac Dowell have turned their attention to “carbon dioxide removal” (CDR), an approach that removes carbon already present in the atmosphere.
In this new volume, the authors explain how CO2 naturally moves into and out of the atmosphere and present a brief history of carbon removal as a concept for dealing with climate change. They also describe the full range of “pathways” that have been proposed for removing CO2 from the atmosphere. Those pathways include engineered systems designed for “direct air capture” (DAC), as well as various “nature-based” approaches that call for planting trees or taking steps to enhance removal by biomass or the oceans. The book offers easily accessible explanations of the fundamental science and engineering behind each approach.
The authors compare the “quality” of the different pathways based on the following metrics:
Accounting. For public acceptance of any carbon-removal strategy, the authors note, the developers need to get the accounting right — and that’s not always easy. “If you’re going to spend money to get CO2 out of the atmosphere, you want to get paid for doing it,” notes Herzog. It can be tricky to measure how much you have removed, because there’s a lot of CO2 going in and out of the atmosphere all the time. Also, if your approach involves, say, burning fossil fuels, you must subtract the amount of CO2 that’s emitted from the total amount you claim to have removed. Then there’s the timing of the removal. With a DAC device, the removal happens right now, and the removed CO2 can be measured. “But if I plant a tree, it’s going to remove CO2 for decades. Is that equivalent to removing it right now?” Herzog queries. How to take that factor into account hasn’t yet been resolved. A short worked example after this list of metrics sketches the netting arithmetic.
Permanence. Different approaches keep the CO2 out of the atmosphere for different lengths of time. How long is long enough? As the authors explain, this is one of the biggest issues, especially with nature-based solutions, where events such as wildfires or pestilence or land-use changes can release the stored CO2 back into the atmosphere. How do we deal with that?
Cost. Cost is another key factor. Using a DAC device to remove CO2 costs far more than planting trees, but it yields immediate removal of a measurable amount of CO2 that can then be locked away forever. How does one monetize that trade-off?
Additionality. “You’re doing this project, but would what you’re doing have been done anyway?” asks Herzog. “Is your effort additional to business as usual?” This question comes into play with many of the nature-based approaches involving trees, soils, and so on.
Permitting and governance. These issues are especially important — and complicated — with approaches that involve doing things in the ocean. In addition, Herzog points out that some CCS projects could also achieve carbon removal, but they would have a hard time getting permits to build the pipelines and other needed infrastructure.
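To make the netting arithmetic under “Accounting” concrete, here is a minimal sketch of how a net-removal figure could be computed. It is an illustrative toy calculation rather than a method from the book; the DAC plant figures and the tree assumptions are invented for the example.

```python
# Toy net-removal accounting; all numbers are invented for the example.

def net_removal_tonnes(gross_captured_t: float, process_emissions_t: float) -> float:
    """CO2 credited = CO2 taken out of the air minus CO2 emitted to do it."""
    return gross_captured_t - process_emissions_t

# A hypothetical DAC plant: captures 100,000 t of CO2 but emits 30,000 t
# running its fans, pumps, and regeneration step.
print(net_removal_tonnes(100_000, 30_000))   # 70,000 t creditable today

# Timing is the unresolved part: a tree removes its CO2 gradually, so only a
# small slice of an assumed 1-tonne lifetime removal happens in year one.
print(round(1.0 / 40, 3))                    # ~0.025 t in the first year
```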
The authors conclude that none of the CDR strategies now being proposed is a clear winner on all the metrics. However, they stress that carbon removal has the potential to play an important role in meeting our climate change goals — not by replacing our emissions-reduction efforts, but rather by supplementing them. As Herzog and Mac Dowell make clear in their book, though, many challenges must be addressed to move CDR from today’s speculation to deployment at scale, and the book supports the wider discussion about how to move forward. Indeed, the authors have fulfilled their stated goal: “to provide an objective analysis of the opportunities and challenges for CDR and to separate myth from reality.”
“Carbon Removal,” by MIT Energy Initiative Senior Research Engineer Howard Herzog (pictured) and Professor Niall Mac Dowell of Imperial College London, explores the history and intricacies of removing carbon dioxide from the Earth’s atmosphere.
At an age when many kids prefer to play games on their phones, 11-year-old Vivan Mirchandani wanted to explore physics videos. Little did he know that MIT Open Learning’s free online resources would change the course of his life.
Now, at 16, Mirchandani is well on his way to a career as a physics scholar — all because he forged his own unconventional educational journey.
Nontraditional education has granted Mirchandani the freedom to pursue topics he’s personally interested in. This year, he wrote a paper on cosmology that proposes a new framework for understanding Einstein’s general theory of relativity. Other projects include expanding on fluid dynamics laws for cats, training an AI model to resemble the consciousness of his late grandmother, and creating his own digital twin. That’s in addition to his regular studies, regional science fairs, Model United Nations delegation, and a TEDEd Talk.
Mirchandani started down this path between the ages of 10 and 12, when he decided to read books and find online content about physics during the early Covid-19 lockdown in India. He was shocked to find that MIT Open Learning offers free course videos, lecture notes, exams, and other resources from the Institute on sites like MIT OpenCourseWare and the newly launched MIT Learn.
“My first course was 8.01 (Classical Mechanics), and it completely changed how I saw physics,” Mirchandani says. “Physics sounded like elegance. It’s the closest we’ve ever come to have a theory of everything.”
Experiencing “real learning”
Mirchandani discovered MIT Open Learning through OpenCourseWare, which offers free, online, open educational resources from MIT undergraduate and graduate courses. He says MIT Open Learning’s “academically rigorous” content prepares learners to ask questions and think like a scientist.
“Instead of rote memorization, I finally experienced real learning,” Mirchandani says. “OpenCourseWare was a holy grail. Without it, I would still be stuck on the basic concepts.”
Wanting to follow in the footsteps of physicists like Sir Isaac Newton, Albert Einstein, and Stephen Hawking, Mirchandani decided at age 12 he would sacrifice his grade point average to pursue a nontraditional educational path that gave him hands-on experience in science.
“The education system doesn’t prepare you for actual scientific research, it prepares you for exams,” Mirchandani says. “What draws me to MIT Open Learning and OpenCourseWare is it breaks the old model of education. It’s not about sitting in a lecture hall, it’s about access and experimentation.”
With guidance from his physics teacher, Mirchandani built his own curriculum using educational materials on MIT OpenCourseWare to progress from classical physics to computer science to quantum physics. He has completed more than 27 online MIT courses to date.
“The best part of OpenCourseWare is you get to study from the greatest institution in the world, and you don’t have to pay for it,” he says.
Innovating in the real world
6.0001 (Introduction to Computer Science and Programming Using Python) and slides from 2.06 (Fluid Dynamics) gave Mirchandani the foundation to help with the family business, Dynamech Engineers, which sells machinery for commercial snack production. Some of the recent innovations he has assisted with include a zero-oil frying technology that cuts 300 calories per kilogram, a gas-based heat exchange system, and a simplified, singular machine combining the processes of two separate machines. Using the modeling techniques he learned through MIT OpenCourseWare, Mirchandani designed how these products would work without losing efficiency.
But when you ask Mirchandani which achievement he is most proud of, he’ll say it’s being one of 35 students accepted for the inaugural RSI-India cohort, an academic program for high school students modeled after the Research Science Institute program co-sponsored by MIT and the Center for Excellence in Education. Competing against other Indian students who had perfect scores on their board exams and SATs, he didn’t expect to get in, but the program valued the practical research experience he was able to pursue thanks to the knowledge he gained from his external studies.
“None of it would have happened without MIT OpenCourseWare,” he says. “It’s basically letting curiosity get the better of us. If everybody does that, we’d have a better scientific community.”
“What draws me to MIT Open Learning and OpenCourseWare is it breaks the old model of education. It’s not about sitting in a lecture hall, it’s about access and experimentation,” says Vivan Mirchandani, seen here giving a TEDEd talk on cosmology. He has completed more than 27 online MIT courses to date.
Campus & Community
‘How old are you? 70s? You’re JV, maybe next year.’
Alvin Powell
Harvard Staff Writer
October 16, 2025
8 min read
Olympic gold winner Bill Becklean has been at crew for 75 years, will be coxing boat of octogenarians at Head of Charles
Bill Becklean feels blessed to have taken up crew 75 years ago, when, as a 14-year-old from Missouri, someone told him he’d make a decent coxswain for his high school team.
“I came from Kansas City, Missouri, and never heard of anything called ‘crew,’” said Becklean, 89, who had traveled east to attend Phillips Exeter Academy in New Hampshire. “I weighed 80 pounds, and somebody said, ‘Well, you ought to be a coxswain,’ and I was. Look where that has taken me.”
His most notable stop may be the 1956 summer Olympics in Melbourne, Australia, when his Yale University eight, with Becklean as coxswain, won gold. The latest will come Sunday on the Charles, when he will steer an extraordinary eight — one full of octogenarians — competing in the 2025 Head of the Charles: the Director’s Challenge mixed eights.
Becklean and his crew prepare for the upcoming competition on the Charles River.
Becklean’s boat is number 27 in the 30-boat field, which features eights with mixed crews of men and women.
Becklean, who first volunteered for the Head of the Charles in 1966, is familiar with the three-mile course, which winds up the river separating Cambridge and Boston, passing under six bridges. He’s watched as the regatta has become one of the world’s largest and most prestigious.
Catherine Saarela, adult recreational sweeps coordinator at Brighton, Massachusetts-based Community Rowing Inc., is sanguine about the prospects of Becklean’s group.
It is a “set boat,” she said, meaning it’s well-balanced as it moves through the water, with consistent timing from the rowers, which increases efficiency and speed. Saarela said that’s probably because some of the rowers typically row singles, where they only have themselves to rely on to keep the boat centered and stable.
“It’s not the fastest boat,” Saarela said. “But it also hasn’t been the last boat to finish.”
The idea of an all-octogenarian boat was inspired by Saarela’s father, also an octogenarian and a cancer survivor.
About four years ago, he got an award from the YMCA for swimming 1,000 miles over the last 10 years, a feat Saarela pondered during her commute from her New Hampshire home to CRI’s Harry Parker Boathouse in Brighton. Once the idea of a boat crewed by rowers in their 80s popped up, it stuck around.
“I was coaching, and I’d go out and totally profile people: ‘Hey, do you know anyone who’s 80 and still rowing?’” Saarela said. “They’d say, ‘I’m not 80,’ and I’d say, ‘I didn’t ask if you were 80. Who do you know?’”
Seat by seat, she built the boat.
It was fellow referee Bill Barrett who suggested Becklean as coxswain.
“One of the other referees, who’s in his 80s, said, ‘Hey, you need a coxswain. You should reach out to Bill Becklean, he’s a gold medal coxswain,’ and I reached out,” Saarela said. “That’s how Bill became my partner in crime in all this.”
Edward Wertheim (from left), James Draper, DeLane Anderson, Peter McGowan, Judith Gillern, Priscilla Hoffnung, Elspeth McLaughlin, Anne Faber, and Becklean.
Saarela is just the latest to pull toward a shared goal with Becklean.
Over the decades he’s helped and been helped by many, in the sport and out, from the other skinny kids he shared a boat with at Phillips Exeter, to fellow volunteers in the Head of the Charles’ early years, to Yale’s medal-winning crew in Melbourne.
The road to gold was anything but smooth, Becklean said. Because Australia is in the southern hemisphere, the summer games were held in November, months after the end of the collegiate rowing season.
Instead of staying and training over the summer, the crew scattered to vacations and summer jobs.
They did get together on Connecticut’s Thames River for training sessions in the middle of August, but when the fall term started, the crew parted ways, and six, including Becklean, went back to campus.
“The first heats in Australia were the first race we’d had since the Olympic trials,” Becklean said.
And it showed. They lost their opening heat, which sent them to the repechage race, where losers of the early heats get a chance to fill open semifinal slots. Shaken by the initial loss, they won their repechage, then beat favorite Australia in the semifinals.
Both boats advanced to the final, however, and Yale beat Australia again to win the gold.
Becklean graduated from Yale with an electrical engineering degree in 1958 and spent the next eight years out of rowing and in the Navy, where he worked on nuclear subs.
Then he left for Harvard Business School and the Charles River, where he couldn’t help noticing the Head of the Charles regatta, then in its second year. He approached legendary Harvard crew coach Harry Parker about volunteering.
A year later he joined the Cambridge Boat Club, where he now has been a member for more than 50 years. He is a past commodore and counts his coaching duties for Cambridge Rindge and Latin’s boys’ crew, based at the club, as “the best thing I’ve ever done.”
“I know this will change those boys’ lives, and it does,” Becklean said. “They learn how to show up on time. They learn how to follow instructions; they learn how to operate as a team; they learn that what you put in is what you get out. There are so many lessons that will be with them for their lives.”
Since then, he’s competed in the Head of the Charles regularly, coxing several boats and rowing, in singles or doubles, in 10 or 12 years.
“I have a doubles partner that I’ve rowed with for 25 years. We rowed it probably 10 times and won it one year, the first that they had an award for 70-year-olds,” Becklean said. “It was the year we turned 70, and we were the only boat in our age group. Then it got very competitive.”
Rooting for Becklean this year will be another rowing partner, Malcolm Salter, a former Harvard lightweight rower who’s been rowing doubles with him for several years.
Salter, HBS’ James J. Hill Professor of Business Administration, emeritus, and faculty fellow for Harvard lightweight crew, said he met Becklean six years ago when he visited the Cambridge Boat Club.
The two started talking and discovered that Becklean had been a student of Salter’s when Becklean was working toward his M.B.A. Now Salter, four years Becklean’s junior, has become something of a student of his.
“My goal is basically to be Bill Becklean,” Salter said. “My goal is to be able to be rowing in the Head of the Charles when I’m 90.”
Becklean, Salter said, has an eye for the sport’s more technical elements, things that can slow the boat: the errant splash of a misplaced blade or a seat sliding slightly out of time. Despite that, he doesn’t lose sight of the fact that rowing should be fun.
“He has total command and control of the boat because he has so much knowledge. He’s the ultimate collegial competitor,” Salter said. “He’s a carrier for the sport. He’s very supportive. He lives it. He loves to talk about it.”
Word has gotten out in the racing community, Saarela said, and older rowers are calling her.
This year — the fourth for the all-80s crew — she’ll have 15 rowers, with Becklean in the coxswain’s seat.
Seven will row downstream from Community Rowing on the Charles River to the race’s start near Boston University’s DeWolfe Boathouse, and seven more will race the three miles back.
One intrepid octogenarian will row both ways.
“I have not had a hard time filling seats,” Saarela said. “I’ve got one woman from Saratoga Springs who called me and wanted to do this. Plus, I’m a referee, I travel around, and I talk about it. I’ve got people who come up to me in their 70s: ‘Do you have an extra spot?’ I say, ‘How old are you? 70s? You’re JV, maybe next year.”
Becklean rowed his last single a few years ago after what for him was the “perfect” race: He had trained hard; conditions were ideal; he rowed well; and beat his desired time. He still rows doubles, though, and isn’t considering stepping away from the coxswain’s seat.
“I love the sport. My closest friends are all rowers. There are times when I’m driving in here and I say, ‘Do I really want to do this?’ but I’ll tell you, every time I come back from having rowed, it just feels so good,” Becklean said. “I’ll keep doing it as long as I can get in a boat. It’s been great fun.”
Campus & Community
Harvard reports operating deficit amid federal funding cuts
Harvard University.
Stephanie Mitchell/Harvard Staff Photographer
October 16, 2025
long read
University continues to advance teaching and research in face of significant financial challenges
With the release of Harvard University’s fiscal year 2025 financial report, the Gazette sat down with Executive Vice President Meredith Weenick and Vice President for Finance and Chief Financial Officer Ritu Kalra to discuss the University’s financial position amid evolving federal policy — from the termination and reinstatement of research funding to a forthcoming increase to the endowment tax.
This interview has been edited for clarity and length.
Fiscal year 2025 was marked by extraordinary disruption to Harvard’s research enterprise, including the termination of federally sponsored research grants. How did these developments affect the University’s financial position?
Kalra: The past year brought unprecedented challenges to Harvard and the higher education sector as a whole. The disruptions were particularly acute for our research community, but I don’t want to dismiss the breadth of the challenges across the University, including heightened concerns for our international students and scholars.
The termination of hundreds of research grants to Harvard did not occur in a vacuum. Heading into FY25, we were already contending with costs rising faster than our revenues, which, as we discussed last year, is not a sustainable path forward. The federal policy landscape began to change in early February, with proposals to cut contracted indirect cost reimbursement rates on active research and with terminations of dozens of individual awards affecting projects underway across our Schools. That was followed by the abrupt termination of nearly the entire portfolio of our direct federally sponsored research grants in the spring. About $116 million in sponsored funds — which are reimbursements for costs the University has already incurred — disappeared almost overnight.
It’s important to note that the terminations occurred on multi-year awards. The total value of the affected awards spans several years of work, not a single-year loss. The $116 million represents about two months of disrupted reimbursements in fiscal year 2025. Those disruptions continued in the ensuing months — the portfolio consists of over 900 direct awards to our faculty, at a cost of over $600 million a year. It has only been in the past few weeks, following Judge Allison Burroughs’ summary judgment in U.S. District Court, that payments on those grants have been restored.
We closed the year with a $113 million operating deficit on a $6.7 billion revenue base. This is our first deficit since the pandemic.
Weenick: The result could have been far worse. Leadership across the University had already begun a process to identify strategic, structural, and sustainable reductions to our budgets, recognizing that changes to federal policy, including proposed caps on research-related funding and the impending increase to the endowment tax, would have significant consequences. While the outcome of individual policies was uncertain — and in some cases, still is — it was evident that, collectively, these measures would have a substantial financial impact. Accordingly, we instituted a hiring freeze across the University, paused salary increases for exempt staff, and delayed nonessential capital projects. We don’t take these measures lightly, and we are grateful to our community for their collective efforts.
We also benefited from the extraordinary generosity of our donors in support of the University. Their contributions were meaningful — over $100 million more than last year in current use giving, the highest in the University’s history. We are so grateful for their support.
Meredith Weenick.
Harvard file photo
Research is part of Harvard’s core mission. How has the University supported researchers through this period of uncertainty?
Weenick: The loss of federal funding was not only a financial challenge — it was a disruption to progress on critical, often lifesaving, research. Labs had to suspend experiments, research teams faced layoffs, and clinical trials were jeopardized.
Our response was immediate and multifaceted. We prioritized sustaining research continuity, both financially and operationally. The University created a $250 million research continuity fund, supplemented by additional resources from Schools, which ensured that critical projects could continue at the research-intensive Schools most impacted, including Harvard Medical School, the T.H. Chan School of Public Health, the Faculty of Arts and Sciences, and the John A. Paulson School of Engineering and Applied Sciences.
As Ritu mentioned, we’ve seen more than 900 awards reinstated and the payments for the majority of expenditures resume, following the federal court ruling that found those grant terminations to be unlawful. While this is welcome news, we must acknowledge that the relationship between research universities and the federal government is changing in fundamental ways. Looking ahead, we must be prepared for a future in which this long-standing partnership is less stable. Successfully navigating this period will depend on planning ahead, working together, and staying flexible.
With federal research grants now reinstated, how is Harvard managing the transition to resume research activity across the University?
Weenick: The court’s ruling was a critical step toward restoring a measure of stability for our research community. With funding now reinstated, researchers can once again focus on advancing the aims of their projects and rebuilding the momentum that was lost during the freeze.
At the same time, we’re being thoughtful about how we move forward. We’ve encouraged investigators to resume activity necessary to fulfill our commitments under the grants, but we’re also advising prudence, particularly in making new long-term or multi-year commitments. Our goal is to sustain the pace of discovery while planning responsibly amid uncertainty.
Kalra: From a financial standpoint, this transition is about sustaining our research commitments while staying disciplined. The reinstatement of grants has provided welcome relief, but we’re still managing the ripple effects of months of disruption and preparing for future volatility in both the funding of current projects and the outlook for new awards.
Ritu Kalra.
Harvard file photo
The endowment returned 11.9 percent in fiscal year 2025, bringing its value to $56.9 billion. How does this performance factor into Harvard’s financial health and capacity to address these challenges?
Kalra: We are deeply grateful to Narv Narvekar and his colleagues at Harvard Management Company for their stewardship of the endowment. Their disciplined, long-term approach is critical to ensuring that the endowment continues to provide stability and continuity, even through periods of great uncertainty. Distributions from the endowment provided $2.5 billion in support — nearly 40 percent of Harvard’s total operating revenue.
These funds are vital to Harvard’s financial health. And they are funds, plural; not singular. The endowment is made up of nearly 15,000 individual funds, the vast majority of which are given for a specific purpose. Endowed gifts made for museum collections cannot be repurposed to support research discoveries in neurobiology or astrophysics. They are not fungible or available to support purposes other than the donor’s intentions.
That’s also why the endowment is not an infinite source we can draw on to close structural budget gaps. It’s designed to provide steady, sustainable support across generations. A temporary increase in the draw from the endowment can provide momentary relief, but we are facing structural changes to the financial model that underpins higher education. We can use the endowment to support us in this transition, but we cannot use the endowment as a shield against reality.
Harvard is also facing an increase in the federal endowment tax, enacted by Congress earlier this year. How is the University preparing to manage the impact of that tax increase?
Kalra: We refer to the tax in shorthand as the endowment tax, but in reality, it is an excise tax on the income from our investments. The first year that Harvard will make tax payments at the new rate is fiscal year 2027, but we will have to recognize the obligation on our balance sheets this year — fiscal year 2026 — and we are working diligently, in partnership with our colleagues at HMC, to manage its financial impact.
While we are awaiting additional guidance from the U.S. Treasury Department, under the new law we expect Harvard’s tax obligation could be in the range of $300 million each year. That means hundreds of millions of dollars that will not be available to support financial aid, research, and teaching. To put that in a bit clearer context, in fiscal year 2025 Harvard spent over $750 million in financial aid and scholarships for students across the University. Every two to three years, we could be paying the equivalent of our entire financial aid budget in taxes.
That’s a significant impact and change to our financial planning, and it highlights why maintaining discipline today is so important. How we manage the endowment’s distribution in the near term will directly shape Harvard’s ability in the future to invest in its mission — and in the people and ideas that define it.
The University issued $1.2 billion in new debt this fiscal year. What drove that decision, and how does it position Harvard for the future?
Kalra: Issuing debt this year was a deliberate and strategic decision to bolster Harvard’s liquidity and maintain momentum on essential capital projects at a time of considerable uncertainty. Prudent use of debt is a normal part of how universities manage their finances — it allows us to invest responsibly in the facilities and infrastructure that support our academic mission over the long term.
The proceeds from the debt issuances help us continue to advance key priorities, including housing renewal projects and developments in Allston. Our AAA/Aaa ratings reflect external confidence in Harvard’s financial position and our commitment to navigate a complex and changing environment with discipline and foresight.
With increased financial pressures, how does Harvard balance near-term constraints with long-term commitments to its people and mission?
Weenick: Harvard’s strength has always been its people. Even as we tighten budgets, our focus remains on supporting faculty, students, and staff — ensuring that the work of discovery and education continues.
With payments on our grants largely restored — at least for now — we have begun to shift out of the research continuity framework we put in place in the spring and are allowing researchers to resume the full level of work necessary to advance the aims of their hard-won grants.
Kalra: The reinstatements of those grants do not erase the disruption the terminations sparked, nor do they negate the uncertainty ahead. That means we can’t simply return to “business as usual.” We need to focus our resources on academic excellence, operate more efficiently, and explore new revenue streams. The long-term trajectory of federal support for research, the increase to the endowment tax, and the broader economic climate require us to adapt structurally.
Harvard’s ability to endure and thrive has always depended on disciplined stewardship paired with bold investment in our mission. Our challenge now is to continue that tradition — to sustain academic excellence while being prudent stewards of the University’s resources.
What are the University’s financial priorities as we enter fiscal year 2026?
Kalra: The financial landscape remains extraordinarily challenging. The federal government has indicated it will appeal the court decision that reinstated our grants, and they’ve also indicated that they will consider other actions that could impact our ability to seek federal grant funding in the future. Other significant headwinds include proposed overall reductions in federal research funding and other policies that could upend the long-standing partnership between universities and the federal government as co-investors in the infrastructure that research relies on. The University has made extensive investments in the infrastructure that enables research — in wet labs and sophisticated equipment such as a cryo-electron microscope that allows for 3D imaging of wisps of proteins. Cuts to indirect cost recovery — coupled with an increase to the excise tax on endowment income — bear directly on our ability to invest in these essential tools of cutting-edge science. And we don’t yet know how the tariffs will unfold. Taken together, these factors could cost Harvard $1 billion annually.
Even against this backdrop, the University’s fiscal discipline, the resilience of our community, and the continued generosity of our donors should continue to position us well. Our focus will be on balancing prudence with purpose — managing expenses carefully, safeguarding liquidity, and sustaining investments that advance excellence in teaching and research. The circumstances are challenging, but our mission and our commitment to it remain unchanged.
Weenick: That’s right. These are challenging times, not just for Harvard but for universities throughout the nation. Every School across the University is feeling enormous pressure to adapt to a more constrained environment. We’ll need to work differently, rethink how we deliver services, and make difficult choices about what we can sustain. This is also an opportunity to reimagine how we work together to support Harvard’s mission. While periods like this are difficult, they help us focus on what’s essential and strengthen the institution for the long term.
Arts & Culture
Was ‘Aeneid’ critiquing or glorifying empire?
Liz Mineo
Harvard Staff Writer
October 16, 2025
8 min read
Authors of new translation dig into lasting impact of epic that Virgil wanted burned
Composed between 29 and 19 B.C. by the Roman poet Virgil, “The Aeneid” tells the story of the Trojan hero Aeneas and the founding of Rome while exploring themes of duty, fate, and struggle. Published after Virgil’s death in 19 B.C., despite his wish for it to be burned because he considered it unfinished, the epic poem remains a monumental work of classic literature.
Scott McGill and Susannah Wright, Ph.D. ’24, classics professors at Rice University, recently published a new translation of the poem with an introduction by Emily Wilson, the prominent classicist and translator of Homer’s “Iliad” and “Odyssey.” In an interview with the Gazette, which has been edited for clarity and length, Wright and McGill discussed the poem’s role as Rome’s national epic, and the questions it poses to modern readers as a celebration of imperial power and a meditation on the costs of conquest for both victors and victims.
How do you explain the poem’s enduring appeal to this day?
Wright: Above all, it is a really compelling story. It includes everything from epic battle scenes and adventures at sea to a tragic love affair and a journey to the underworld, and it also speaks to themes that remain relevant to many readers today — how to balance private desire and public duty, how to keep going when all seems lost, and how to determine the obligations we have to future generations. The poem is also very interested in questions about the ethics of migration, because the Trojans are, in a sense, at once both refugees and conquerors.
McGill: The poem also has a lot to tell us about the relationship between poetry and power because it was written as a national epic under the emperor Augustus, and in some ways, for Augustus. And yet, Virgil shows a great deal of artistic independence. It would appear that Virgil takes his poem in a different direction than we suspect Augustus would have anticipated. That possibly illustrates Virgil’s heroic temperament as an artist, and Augustus’ tolerance for different perspectives and different points of view.
Virgil died before the publication of “The Aeneid.” What do historians say about Virgil’s wishes to burn his work and why was it published regardless of his wishes?
McGill: There have been different interpretations. One is simply that Virgil had planned to revise “The Aeneid,” but he got sick and died. We know that the poem is incomplete because there are incomplete lines, and he would not have kept them in the work and had it published. It’s a story that may be meant to illustrate his perfectionism as a poet, but there is also a political interpretation, taken up by the writer Hermann Broch, who wrote that Virgil’s wish to burn his poem was a protest because it could be used to justify absolute power. It’s completely speculative, but the story has been used to say that Virgil didn’t want to create a poem that could glorify autocracy. Of course, the follow-up to that story is that Augustus saved the poem, had one of Virgil’s friends edit it, and ordered its publication, and it has been in circulation ever since.
Wright: Virgil’s epic can certainly be read as glorifying the Augustan regime, and therefore as having a celebratory, perhaps even propagandistic approach to Roman power. But the other side is that Virgil takes a lot of care in illustrating the costs of that power. In many cases, he reserves his deepest sympathy for the characters who oppose the Trojans’ destined mission, those who resist them and who become victims of Aeneas and his followers — and in that way, the poem can also be interpreted as a sophisticated critique of Roman imperialism.
What do you hope your translation will reveal to new readers of “The Aeneid”?
McGill: We hope that it reveals the deep humanity of the poem, the profound sympathy that it shows to all its characters, and the richness of its emotional world. Epic poetry, especially if you’re a young reader, can feel like something you have to do, but not necessarily enjoy. Our hope is that the poem can come alive, and not only through the story, which is an incredible story, but also in the richness of its humane vision.
Wright: There are two big misconceptions that readers often have about “The Aeneid.” One of them is that it’s merely a kind of propaganda piece for the Roman empire and for the Augustan regime. The other is that it’s a flat rehashing of Homer and isn’t engaging creatively with the “Iliad” and the “Odyssey.” To us, “The Aeneid” is neither of those things, but something much deeper and much richer. We hope that our translation will enable readers to connect with the immediacy and the urgency of the questions this ancient poem still poses today.
Virgil took inspiration from Homer’s masterpieces. What are the differences between the two authors?
McGill: Virgil is deeply reliant on Homer, but where Virgil takes Homer and the epic tradition in a new direction is in his understanding of and depiction of the hero. In Homer, the hero is concerned with his own glory, with accumulating fame, kleos in Greek. Virgil makes Aeneas a new kind of hero, who’s not so committed to his own kleos, but to history and to public duty, serving the Trojan refugees that he’s leading, and making sure that they get to Italy. Whereas the Homeric hero is all about accumulating personal glory, the Virgilian hero is about self-sacrifice; putting one’s own needs second to what one has to do for history and for the community.
Wright: Aeneas’ journey in the first half of the poem resembles the “Odyssey,” and the battlefield scenes in the second half certainly call to mind the “Iliad.” But it’s a complicated dynamic. When we leave Aeneas at the end of the poem, he’s caught up in a frenzied rage that might make us think of Achilles. At the end of the “Iliad,” though, Achilles shows mercy and has a profound moment of reconciliation with his enemy; Aeneas, in contrast, gives in to fury. That illustrates the depth of Virgil’s engagement with Homer: this isn’t a flat reproduction of what we see in the “Iliad” and “Odyssey,” but a masterful and creative response to those poems. That’s part of the power of Virgil’s epic.
How did “The Aeneid” come to be the founding myth of the Roman empire?
Wright: The story of Aeneas as a precursor to the Roman people goes back prior to Virgil, to the fifth and fourth centuries B.C., when Greek historians began to connect Aeneas with Rome as a founding figure and trace Roman ancestry back to the Trojans. Virgil doesn’t, by any means, invent Aeneas, but he gives Aeneas, who was a relatively minor figure in Homer’s “Iliad,” new primacy at the center of his epic. During Virgil’s time, Augustus emerged, after the Battle of Actium in 31 B.C., as the first emperor of Rome, and it was as this new regime was being consolidated in the 20s B.C. that Virgil wrote “The Aeneid.”
McGill: Augustus wanted to promote cultural renewal, which for him meant a return to Rome’s origins and its foundational virtues. “The Aeneid” certainly fits with that. It’s a two-pronged foundation myth for Rome, where you’ve got the Trojans on the one hand, and then you’ve got Romulus and Remus on the other. The Romans reconcile this by making Aeneas the ancestor of Romulus and Remus.
No translation is absolute. How close to the original is your translation of Virgil’s epic?
McGill: We certainly were committed to staying as close to the Latin as possible. We wanted to remain true to the substance and content, and especially the tone and feel of the poem, as well as the language. Virgil’s Latin largely comprises everyday words, and we wanted to make sure that our language reflected that as well. On the formal level, we translated the poem into blank verse, unrhymed iambic pentameter, which is the cultural equivalent of Virgil’s dactylic hexameter.
Wright: The particular metrical pattern that we chose has had a long life in the English tradition, as a meter associated with epic verse. In addition to employing a cultural equivalent to Virgil’s meter, we also did our best to capture his poetic devices and sound effects, because we wanted to approximate as much as we could the experience of reading the poem in Latin.
Great sound can transform an experience – think of how listening to music brightens a dreary commute, an eerie soundtrack heightens the tension in a movie scene, and comfortable conversation amid the natural cacophony of a busy kitchen and full house is possible in a restaurant with good acoustics. All this is thanks to audio engineering, a simultaneously creative and technical field that shapes our lives through music recordings, live sound at events, movie soundtracks, acoustics and more.
At the Yong Siew Toh Conservatory of Music, audio engineering is taught under the Audio Arts & Sciences (AAS) programme, which leverages its strategic positioning in the conservatory to provide a music-centred approach to the discipline. The curriculum includes music studies courses taken by all YST undergraduates, emphasising not just technical skills but also deep musical expertise.
Explained Associate Professor Zhou Xiaodong, who heads the AAS department: “They learn in the same classroom as composers and instrument players, because in their work, they will need to communicate well with musicians and conductors.”
The programme, which celebrates its 15th anniversary this year, takes inspiration from the prestigious audio science programme at the Peabody Institute of the Johns Hopkins University, where Assoc Prof Zhou earned his Master of Arts in Audio Science and Acoustics. Notably, the Peabody Institute formalised an agreement with NUS to support the initial development of the Conservatory back in 2001.
AAS was originally founded as Recording Arts and Sciences with the intention of growing Singapore’s small market for classical music recording, and over the years, it has evolved in response to student feedback and industry changes. The name change to Audio Arts and Sciences in 2018 marked an expansion of its scope from recording to music production, sound design and modern sound technology.
Honing expertise through a hands-on approach
As with other majors in YST, the AAS cohorts are small, with just four to five students admitted each year. Students who apply to AAS need not have audio engineering experience, but are expected to have a strong music background and play at least one instrument.
The first two years of the undergraduate programme focus on the foundations of music production and recording, critical listening and acoustics, after which students deepen their knowledge into the various fields of music post-production, live sound and sound for visuals in their third year. They also complete a compulsory internship in their final year.
Courses are taught by Assoc Prof Zhou, Ms Calla Lim (an AAS alumna from the Class of 2023) and a network of industry professionals, some of whom are regular instructors while others conduct ad hoc masterclasses. Involving industry professionals helps to ensure the skills taught match industry expectations, said Assoc Prof Zhou. Students learn the latest techniques from such instructors, accompany them to work on projects at major venues like the Esplanade and Victoria Concert Hall and get hands-on training with modern technology owned by their commercial recording studios.
Leo Wynn Chan, a Year 4 AAS student who studied audio technology at polytechnic before enrolling in YST, especially appreciates this aspect of the programme. Some valuable connections he has made among the AAS lecturers include his current internship supervisor, Mr Yen Yu Ting, sound editor and designer at music and sound design studio GRYD, who teaches audio for film and television at YST; and Mr Shah Tahir, a veteran sound designer and engineer who served as sound designer for multiple editions of Singapore’s National Day Parade and teaches live sound reinforcement at YST.
“Meeting industry professionals and working with them – seeing how they work – are very rare opportunities. But we get to do this just by going through this programme and the masterclasses,” Leo Wynn said.
Small class sizes create an intimate learning experience and ensure that every student gets plenty of hands-on practice with the equipment, both during lessons and through projects outside the classroom. AAS students work on live sound and archival recordings for YST’s more than 300 music events each year and support requests to use the school’s recording studio, which come from YST students and NUS bands as well as professional musicians and orchestras outside NUS.
Said Ms Lim: “Our teaching adopts a hands-on approach and emphasises the importance of first-hand practice. Nowadays, you can learn a lot from books and information on the internet, and AI can even generate a full personalised lesson plan. But none of this compares to the actual valuable experience of working with musicians.”
Aside from technical and musical expertise, the YST experience equips students with soft skills like relationship management that are crucial to succeeding in the industry after graduation. Mr Daniel Wong (BMus 2017), a sound designer, composer and music producer who has designed sound effects for Halloween Horror Nights for Universal Studios Singapore and worked on music arrangements for the 2024 Christmas Wonderland event at Gardens by the Bay, shared that he still finds work with friends he made during his university days at YST.
“This profession is a lot about people – client servicing and connections. While being an excellent engineer is essential, being kind, professional, and having a network of supportive connections can make all the difference in your success.”
Nothing ventured, nothing gained
Beyond the curriculum, the AAS programme at YST can open doors for curious students who take the initiative to broaden their horizons. For example, students can tap YST’s international network of partnerships with other universities and educational institutions around the world for overseas learning opportunities, such as a six-day course on Sound Systems Design and Optimisation offered by the Royal Conservatoire of The Hague in the Netherlands and a three-week training programme on film sound production at the Beijing Film Academy.
Ad hoc projects with external parties can also pave the way to less common career paths, as they did for Mr Conrad Chung (BMus 2014), who currently researches hearing loss in musicians while teaching audiology at the National University Hospital (NUH). Mr Chung discovered his interest in the field when he helped audiology master’s students and speech therapists from NUH to produce recordings for their research and clinical work. The connections he made through these projects helped him to secure an internship with the hospital’s ear, nose and throat department and eventually led to him pursuing further studies to become an audiologist.
His advice to students pursuing audio engineering is to build a diverse skill set and be prepared for a flexible work schedule and pay that may not always follow a traditional salary structure.
“The good thing about this programme is that you get to try a lot of things – recorded sound, live sound, acoustics and video. There is the potential to go deeper into any of them, but you have to put in the work and make your own education more enriching,” said Mr Chung.
He added: “Working in this field, every day will be different, and you’ll get to meet new people, work on new projects and listen to new music. You’ll be closely in touch with the art scene, which is very dynamic. Don’t expect a 9-5, but it will be fun.”
The surface of the lungs is covered with a fluid that increases their deformability. This fluid has the greatest effect when you take deep breaths from time to time, as researchers at ETH Zurich have discovered using sophisticated measurement techniques in the laboratory.
Industrial food processing is often the target of criticism – unjust criticism, says Patrick Rühs, as many processing steps actually make foods healthier and easier to digest.
Say a person takes their French Bulldog, Bowser, to the dog park. Identifying Bowser as he plays among the other canines is easy for the dog-owner to do while onsite.
But if someone wants to use a generative AI model like GPT-5 to monitor their pet while they are at work, the model could fail at this basic task. Vision-language models like GPT-5 often excel at recognizing general objects, like a dog, but they perform poorly at locating personalized objects, like Bowser the French Bulldog.
To address this shortcoming, researchers from MIT, the MIT-IBM Watson AI Lab, the Weizmann Institute of Science, and elsewhere have introduced a new training method that teaches vision-language models to localize personalized objects in a scene.
Their method uses carefully prepared video-tracking data in which the same object is tracked across multiple frames. They designed the dataset so the model must focus on contextual clues to identify the personalized object, rather than relying on knowledge it previously memorized.
When given a few example images showing a personalized object, like someone’s pet, the retrained model is better able to identify the location of that same pet in a new image.
Models retrained with their method outperformed state-of-the-art systems at this task. Importantly, their technique leaves the rest of the model’s general abilities intact.
This new approach could help future AI systems track specific objects across time, like a child’s backpack, or localize objects of interest, such as a species of animal in ecological monitoring. It could also aid in the development of AI-driven assistive technologies that help visually impaired users find certain items in a room.
“Ultimately, we want these models to be able to learn from context, just like humans do. If a model can do this well, rather than retraining it for each new task, we could just provide a few examples and it would infer how to perform the task from that context. This is a very powerful ability,” says Jehanzeb Mirza, an MIT postdoc and senior author of a paper on this technique.
Mirza is joined on the paper by co-lead authors Sivan Doveh, a postdoc at Stanford University who was a graduate student at Weizmann Institute of Science when this research was conducted; and Nimrod Shabtay, a researcher at IBM Research; James Glass, a senior research scientist and the head of the Spoken Language Systems Group in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL); and others. The work will be presented at the International Conference on Computer Vision.
An unexpected shortcoming
Researchers have found that large language models (LLMs) can excel at learning from context. If they feed an LLM a few examples of a task, like addition problems, it can learn to answer new addition problems based on the context that has been provided.
A vision-language model (VLM) is essentially an LLM with a visual component connected to it, so the MIT researchers thought it would inherit the LLM’s in-context learning capabilities. But this is not the case.
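As a rough illustration of that contrast, the sketch below shows a few-shot text prompt of the kind LLMs handle well, alongside the visual analogue that motivates this work. The `llm_complete` and `vlm_answer` calls are hypothetical stand-ins for a chat-style model API, not real library functions.

```python
# Hypothetical sketch of in-context learning; `llm_complete` and `vlm_answer`
# stand in for whatever model API is available -- they are not real library calls.

def addition_prompt() -> str:
    # Two worked examples establish the pattern; the model infers the task.
    return (
        "Q: 12 + 7 = ?  A: 19\n"
        "Q: 33 + 9 = ?  A: 42\n"
        "Q: 25 + 14 = ? A:"
    )

# A capable text-only LLM typically completes this with "39" from context alone.
# answer = llm_complete(addition_prompt())

def localization_prompt(reference_images: list, query_image) -> dict:
    # The visual analogue: a few photos of one specific pet, then a new scene.
    return {
        "images": reference_images + [query_image],
        "text": "The first images all show the same pet. "
                "Return its bounding box in the last image.",
    }

# This is the kind of request current VLMs handle poorly, which is the gap
# the new training method targets.
# box = vlm_answer(localization_prompt(refs, scene))
```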
“The research community has not been able to find a black-and-white answer to this particular problem yet. The bottleneck could arise from the fact that some visual information is lost in the process of merging the two components together, but we just don’t know,” Mirza says.
The researchers set out to improve VLMs’ abilities to do in-context localization, which involves finding a specific object in a new image. They focused on the data used to retrain existing VLMs for a new task, a process called fine-tuning.
Typical fine-tuning data are gathered from random sources and depict collections of everyday objects. One image might contain cars parked on a street, while another includes a bouquet of flowers.
“There is no real coherence in these data, so the model never learns to recognize the same object in multiple images,” he says.
To fix this problem, the researchers developed a new dataset by curating samples from existing video-tracking data. These data are video clips showing the same object moving through a scene, like a tiger walking across a grassland.
They cut frames from these videos and structured the dataset so each input would consist of multiple images showing the same object in different contexts, with example questions and answers about its location.
“By using multiple images of the same object in different contexts, we encourage the model to consistently localize that object of interest by focusing on the context,” Mirza explains.
Forcing the focus
But the researchers found that VLMs tend to cheat. Instead of answering based on context clues, they will identify the object using knowledge gained during pretraining.
For instance, since the model already learned that an image of a tiger and the label “tiger” are correlated, it could identify the tiger crossing the grassland based on this pretrained knowledge, instead of inferring from context.
To solve this problem, the researchers used pseudo-names rather than actual object category names in the dataset. In this case, they changed the name of the tiger to “Charlie.”
“It took us a while to figure out how to prevent the model from cheating. But we changed the game for the model. The model does not know that ‘Charlie’ can be a tiger, so it is forced to look at the context,” he says.
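To make this concrete, here is a minimal sketch of how such fine-tuning samples might be assembled from video-tracking data, with pseudo-names substituted for category labels. The data format, field names, and pseudo-name list are hypothetical stand-ins, not the authors' actual pipeline or dataset schema.

import random

# Hypothetical sketch: build one instruction-tuning sample from a tracked object.
# "track" is assumed to be a list of dicts like {"frame": image, "box": (x1, y1, x2, y2)},
# with frames sampled far enough apart that the background changes between them.

PSEUDO_NAMES = ["Charlie", "Bowser", "Milo", "Pixel"]

def build_sample(track, num_context_frames=3):
    pseudo = random.choice(PSEUDO_NAMES)  # hide the real category name to prevent "cheating"
    context = track[:num_context_frames]
    query = track[num_context_frames]

    # Context turns: the same object ("Charlie") in several scenes,
    # each paired with its ground-truth bounding box.
    messages = []
    for item in context:
        messages.append({
            "image": item["frame"],
            "question": f"Where is {pseudo}?",
            "answer": f"{pseudo} is at {item['box']}.",
        })

    # Query turn: the model must localize the same object in a new scene from
    # context alone, since the pseudo-name carries no pretrained association.
    messages.append({
        "image": query["frame"],
        "question": f"Where is {pseudo}?",
        "answer": f"{pseudo} is at {query['box']}.",
    })
    return messages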
The researchers also faced challenges in finding the best way to prepare the data. If frames were sampled too close together, the background would not change enough to provide data diversity.
In the end, fine-tuning VLMs with this new dataset improved accuracy at personalized localization by about 12 percent on average. When they included the dataset with pseudo-names, the performance gains reached 21 percent.
As model size increases, their technique leads to greater performance gains.
In the future, the researchers want to study possible reasons VLMs don’t inherit in-context learning capabilities from their base LLMs. In addition, they plan to explore additional mechanisms to improve the performance of a VLM without the need to retrain it with new data.
“This work reframes few-shot personalized object localization — adapting on the fly to the same object across new scenes — as an instruction-tuning problem and uses video-tracking sequences to teach VLMs to localize based on visual context rather than class priors. It also introduces the first benchmark for this setting with solid gains across open and proprietary VLMs. Given the immense significance of quick, instance-specific grounding — often without finetuning — for users of real-world workflows (such as robotics, augmented reality assistants, creative tools, etc.), the practical, data-centric recipe offered by this work can help enhance the widespread adoption of vision-language foundation models,” says Saurav Jha, a postdoc at the Mila-Quebec Artificial Intelligence Institute, who was not involved with this work.
Additional co-authors are Wei Lin, a research associate at Johannes Kepler University; Eli Schwartz, a research scientist at IBM Research; Hilde Kuehne, professor of computer science at Tuebingen AI Center and an affiliated professor at the MIT-IBM Watson AI Lab; Raja Giryes, an associate professor at Tel Aviv University; Rogerio Feris, a principal scientist and manager at the MIT-IBM Watson AI Lab; Leonid Karlinsky, a principal research scientist at IBM Research; Assaf Arbelle, a senior research scientist at IBM Research; and Shimon Ullman, the Samy and Ruth Cohn Professor of Computer Science at the Weizmann Institute of Science.
This research was funded, in part, by the MIT-IBM Watson AI Lab.
When Ong Hua Han (NUS Business ’16) was considering his university options in 2012, NUS was not an obvious choice. As a wheelchair user with osteogenesis imperfecta, also known as brittle bone disease, navigating the University’s steep terrain seemed daunting.
But a warm meeting with the then Assistant Dean and Director of the Bachelor of Business Administration (BBA) programmes at NUS Business School, Dr Helen Chai, changed his mind. Over tea and a personalised tour of the school, she allayed his family’s concerns. “Though the campus’ hilly terrain was not the easiest to navigate, Dr Chai reassured us that people were there to support me,” recalls Hua Han. “And that sealed the deal.”
At matriculation, Hua Han discovered the NUS Enablers, a student group set up in 2011 to support students with special needs. Drawn by their mission to raise awareness and make the campus more accessible, Hua Han signed up. He later led the group’s then-flagship event, "Wheel-a-thon," a creative race that challenged participants to experience campus through the perspective of a wheelchair user.
At the time, accessibility support at NUS was largely unstructured. That changed in 2014 with the establishment of the Student Accessibility Unit (SAU), formerly the Disability Support Office, under the Office of Student Affairs. SAU became a game changer, consolidating academic, logistical and social support for students with disabilities under one roof. No longer would students have to knock on multiple doors to get the help they needed.
SAU currently supports the 3 per cent of the NUS student population who identify as having disabilities and accessibility needs, working closely with university partners and student advocates to foster a culture of inclusion.
"Our vision for an accessible learning environment goes beyond providing support and opportunity — it’s about recognising each student as an individual," said Ms Agnes Yuen, who heads the SAU and is a pioneering member of the team. "Every connection reminds us that our shared humanity is reflected in honouring every lived experience, enabling students to thrive beyond labels and disabilities."
Beyond coordinating an accessible mobility van service alongside the University Campus Infrastructure (UCI) to facilitate travel across faculties, Hua Han recalled how SAU’s partnership with SG Enable helped him secure his first internship at a Big Four accounting firm, and a subsequent one at a global bank where he now works full-time as a relationship manager in their corporate banking division.
Today, he continues the advocacy work he began at NUS, leading initiatives in his workplace to increase participation and awareness for people with disabilities.
"The internship experiences offered me a window into my potential career path, helping me imagine new opportunities that lie ahead."
Finding identity and community
For final-year Pharmacy student Nicolette Koh, who is hard of hearing, the support from SAU has been more of a journey within. She reached out to SAU immediately after accepting her place in NUS in 2022, unsure of what to expect. What she found was a safe, non-judgmental space where she was treated as an equal. SAU equipped her with real-time speech-to-text transcription software, which helped her follow lectures more effectively. But the biggest gift, she says, was something less tangible: identity.
“Before university, I felt like I had to hide my disability,” she said. “Through SAU and learning sign language at NUS, I found a sense of community and realised being deaf isn’t a weakness—it’s just who I am.”
Encouraged by SAU, Nicolette went on to serve as President of the NUS Enablers in Academic Year 2023/2024, where she organised disability awareness talks in collaboration with SPD, the university’s first Purple Parade contingent in 2023, and led inclusive student initiatives like SIGNapse—a student-run programme that bridges communication gaps between the deaf community and students entering healthcare professions.
For her work championing community causes and raising awareness for people with disabilities, Nicolette received the NUS Community Impact Scholarship, awarded to outstanding NUS undergraduates who exhibit leadership in the areas of Environmental Sustainability and/or Community Service. She also received the Leadership Distinction award at the NUS Achievement Awards 2024, a prestigious honour and the only University-level award recognising exceptional achievements in student life beyond academics.
Allies in action
Nicolette’s work also inspired students like Loo Quan Xuan, a final-year Sociology major who joined NUS Enablers in his second year. Expecting simply to learn from others, he soon found himself stepping up as an active ally. He went on to serve as Vice President of NUS Enablers in 2024, leading initiatives such as the Human Library and working with SAU to pioneer a dialogue series exploring themes of disability pride, allyship and ableism.
“What I learned is that inclusion is not just about physical access—it’s about the right to be part of conversations, to share your lived experience, and to be heard,” he said. “Sometimes the most powerful thing you can do is just listen.”
Towards a whole-of-campus approach
This ethos of inclusion also guides the SAU, which works closely with students on awareness-raising and capacity-building to co-create a more inclusive campus.
For example, a recent collaboration with the University Campus Infrastructure team, the Singapore Land Authority (SLA) and student volunteers mapped barrier-free routes across campus, so users can now navigate NUS via the OneMap app with more detailed accessibility information.
SAU has also collaborated with the NUS Centre for Teaching, Learning and Technology and the Office of the Provost to implement Universal Design for Learning principles, such as live captioning, into classrooms, to benefit all students, beyond those with accessibility needs.
“Supporting students with disabilities is never the work of one unit alone,” said Agnes. “It takes a whole-of-campus approach. This underlies our commitment to build and deepen awareness, skills and empathy of our staff and student community, empowering them to be effective and inclusive allies and educators.”
These efforts not only address practical challenges but also reduce the social and emotional burden of asking for help. With inclusion woven into everyday campus life, students are encouraged to learn, participate and thrive with a sense of belonging.
A culture of inclusion, a decade on
For Hua Han, the impact of NUS’ inclusive culture is deeply personal. A turning point was his student exchange in Maastricht, where—despite fracturing his tibia during his first month—he experienced the independence of living abroad. “If I could handle an injury without the comfort and care of my family, then there are many things I can do,” he reflected.
Today, national conversations around mental health, neurodiversity, and disability continue to shift perspectives. On campus, SAU organises the Inclusive Fest and Accessibility Activation, creating opportunities for dialogue, connection, and growth among the community.
“We are seeing more students coming forward to begin the conversation with us, and that’s a positive shift,” said Agnes. “It reflects greater trust, confidence, and acceptance for the university to journey with them.”
As Hua Han aptly put it: “Back then, we relied on individual champions. Now, there’s a whole ecosystem of support.” Still returning to campus to mentor and engage, he remains part of that growing community of champions—proof that when inclusion takes root, its impact lasts well beyond graduation.
Researchers from NUS have pioneered a photocatalytic atom-swapping transformation that converts oxetanes into a variety of four-membered saturated cyclic molecules, which are key scaffolds in medicinal chemistry. By introducing a new synthetic blueprint for these prized drug motifs, this discovery could potentially streamline the synthesis of pharmaceuticals and complex drug analogues that would otherwise require multi-step routes.
The research team was led by Associate Professor Koh Ming Joo from the NUS Department of Chemistry, together with Assistant Professor Zhang Xinglong from The Chinese University of Hong Kong, Hong Kong, China.
The research breakthrough was published in the scientific journal Nature on 15 October 2025.
Non-aromatic (saturated) heterocycles and carbocycles form the skeleton of countless bioactive and functional molecules. Four-membered saturated cyclic molecules such as azetidines, thietanes and cyclobutanes are increasingly valued in drug discovery for their desirable physicochemical properties, such as potency, stability, metabolic stability and target specificity. However, the traditional retrosynthetic approaches typically deconstruct the ring into simpler starting materials that have to be prepared separately through numerous steps. This approach is often energy- and time-consuming and generates excessive waste, particularly in the assembly of complex drug molecules.
Assoc Prof Koh said, “The conventional way of constructing four-membered rings employs cycloaddition or nucleophilic substitution chemistries that limit the range of obtainable molecular scaffolds. There is an urgent need to design a new approach that not only simplifies the synthesis of small-ring pharmacophores but also unlocks uncharted regions of the chemical space.”
New atom-swapping logic accelerates synthesis of non-aromatic drug scaffolds
The researchers developed a skeletal editing strategy that selectively exchanges the oxygen atom of an oxetane building block for another functional group (nitrogen, sulfur, or carbon), through reaction with appropriate reagents. This transformation process is achieved by using a photocatalyst to break the oxetane ring into a reactive dibromide compound under visible light. The ring is then rebuilt using different nucleophiles to give a range of four-membered heterocycles and carbocycles in one pot. Computational studies by Asst Prof Zhang’s group provided insights into the underlying mechanism and the origins of the high chemoselectivity.
To demonstrate the value of their method, the researchers successfully simplified the preparation of advanced drug intermediates, reducing the number of synthetic steps from between eight and 12 down to four, delivering substantial cost savings and waste reduction. The researchers also applied their method to the late-stage editing of complex bioactive oxetanes to obtain heterocyclic drug candidates with enhanced properties, bypassing the need to make them from scratch.
“Our atom-swapping manifold offers a convenient diversification platform to transform readily accessible oxetane feedstocks into different classes of high-value saturated cyclic compounds in one operation. This would empower chemists in their synthetic endeavours by providing new opportunities in making cyclic functional molecules for important applications such as drug discovery,” added Assoc Prof Koh.
Studies are ongoing to extend the methodology to heterocyclic drug compounds of various ring sizes relevant to therapeutics.
Arts & Culture
When your English teacher writes a book on Taylor Swift
Professor Stephanie Burt examines star’s influence, work ethic, why her music matters
Eileen O’Grady
Harvard Staff Writer
October 15, 2025
7 min read
When Stephanie Burt decided to carry a pink and blue Taylor Swift tote bag to class one day in fall 2023, she just thought it would be a fun way to transport her books and laptop, and let her students know she was a Swiftie.
“No part of this journey was anything I expected,” said Burt, who realized there was interest in a Swift course after some students spotted her bag and asked her to supervise their independent research work on the pop singer. “I figured I would have a Taylor Swift seminar, and there would be eight or 10 or 15 Swifties, and we’d have a big wooden table, and we would talk about how her songs worked and study them in conjunction with relevant poems and novels and other pieces of writing. Then 200 people showed up.”
In her new book, “Taylor’s Version: The Poetic and Musical Genius of Taylor Swift,” Burt draws on her teaching and her experience as both literary critic and pop music enthusiast to examine the singer-songwriter’s body of work and the community of fans it inspires.
In this edited conversation with the Gazette, Burt shares her thoughts on Swift’s influence as a songwriter who remains both aspirational and relatable, and defined by an intense work ethic.
Did teaching the Taylor Swift course give you any new perspectives, either about Swift or about teaching?
I got to know her work better, especially the parts of her work that were not already my favorites.
I got to know the first album much better than I would have otherwise — you can really hear her becoming herself in real time as you go from one song to the next.
And I got to know her album “Reputation,” which is probably still her most controversial album, where she makes the most visible break from what she had been trying to do before.
I also learned a lot about the ups and downs and the entertainment value and the limits of teaching a very large humanities course.
I don’t know that I’m going to try to teach a course of that size again. I certainly expect the Taylor class to run every few years for as long as there’s student interest, but I may make it a seminar next time.
In your book, you suggest that Swift’s success is due, in part, to singing songs she wrote herself. Why is that important?
If you go back far enough in the history of Western song forms, there’s no reason why the vocal interpreter of the song needs to be the person who wrote the song.
But after the rise of the Beatles and Bob Dylan in the mid-’60s, you get a strong expectation in parts of popular music — and an openness to it in others — that the songwriter be the singer. The openness provides an opportunity for Taylor when she’s in her teens starting out and sees herself principally as a songwriter.
One of the reasons why she walked away from her first chance at a major label contract in Nashville was that if she had gone with that contract, she would have been required to sing songs she didn’t write.
I think it’s largely accurate to see all of the things that Taylor has learned to do as being in the service of getting the songs that she writes. She learns to be a dancer; she learns to co-organize a stage company as someone who’s very hands-on about her own touring; she gets to be a better and more consistent singer; she learns to put a tremendous amount of energy into fan relations and other kinds of marketing.
She demonstrates all of these skills, without which she would not have become the economic titan that she is.
What do you mean when you write that Swift is both aspirational and relatable?
She’s able to write many different songs that allow a sympathetic listener at once to say, “I have a lot in common with her, she really gets me,” and “I like to imagine being her, I wish I was more like her.”
The fact that she can do both of those things, not only in the same career but in the same song, and that she does them so often and with such formal variety, that’s really the core of why she’s so good.
Look at the title track from “Red.” On the one hand, it would be great to be able to imagine the thrill of being in a Maserati going many miles an hour — you’re in this sleek machine; life is going by; and everything is excitement.
On the other hand, many of us have had the experience of falling for someone and feeling like things are happening so fast that it’s scary. The sense, from “Red,” is that she’s been there too. She is able to write words and music that are both attractive and exciting to think about and also reach out to who we’ve already been.
You also write, importantly, about Swift’s work ethic.
She does a lot of work to get and stay popular: interviews, going on TV, doing appearances, touring. Within the songs, she writes about how she can’t stop working. She can’t slow down.
The song “Mastermind” is probably the best example of that. She is always afraid things will be taken away from her, and she’s always trying to get to the next thing and sort of outrun her brain weasels.
I think a lot of us see that aspect in our own lives, especially people at Harvard. If you never stop working and you never slow down, you’ll never have to confront your self-doubt. That’s honestly one of the reasons that I personally have gradually become such a big Swiftie — there’s a lot I want to do. If I slow down and stop, maybe I won’t be worth anything, and maybe it will all just go away, so I have to never stop working.
Fortunately, I like working. It’s not terribly healthy, but it’s very American, and it’s one of the things Taylor sings about.
Has writing and teaching Swift changed the way that you listen to her music now?
One of the ways that I know that I’ve made a good choice when I decide to write about something is that when you make a good choice you’re more into it at the end than at the start.
I was at a karaoke event last week, and I sang “I Can Do It With a Broken Heart.” It is a song that requires you to breathe in challenging, counterintuitive ways.
Taylor tells you at the beginning of the second verse, “I can hold my breath / I’ve been doin’ it since he left.” It’s one more case of playful self-referentiality.
I learned even more about the song, which I had already written about at length. With my favorite Taylor works, there’s just always more there.
A decade-plus collaboration between MIT’s AgeLab and the Toyota Motor Corporation is recognized as a key contributor to advancements in automotive safety and human-machine interaction. Through the AgeLab at the MIT Center for Transportation and Logistics (CTL), researchers have collected and analyzed vast real-world driving datasets that have helped inform Toyota’s vehicle design and safety systems.
Toyota recently marked the completion of its 100th project through the Collaborative Safety Research Center (CSRC), celebrating MIT’s role in shaping technologies that enhance driver-assistance features and continue to forge the path for automated mobility. A key foundation for the 100th project is CSRC’s ongoing support for MIT CTL’s Advanced Vehicle Technology (AVT) Consortium.
Real-world data, real-world impact
“AVT was conceptualized over a decade ago as an academic-industry partnership to promote shared investment in real-world, naturalistic data collection, analysis, and collaboration — efforts aimed at advancing safer, more convenient, and more comfortable automobility,” says Bryan Reimer, founder and co-director of AVT. “Since its founding, AVT has drawn together over 25 organizations — including vehicle manufacturers, suppliers, insurers, and consumer research groups — to invest in understanding how automotive technologies function, how they influence driver behavior, and where further innovation is needed. This work has enabled stakeholders like Toyota to make more-informed decisions in product development and deployment.”
“CSRC’s 100th project marks a significant milestone in our collaboration,” Reimer adds. “We deeply value CSRC’s sustained investment, and commend the organization’s commitment to global industry impact and the open dissemination of research to advance societal benefit.”
“Toyota, through its Collaborative Safety Research Center, is proud to be a founding member of the AVT Consortium,” says Jason Hallman, senior manager of Toyota CSRC. “Since 2011, CSRC has collaborated with researchers such as AVT and MIT AgeLab on projects that help inform future products and policy, and to promote a future safe mobility society for all. The AVT specifically has helped us to study the real-world use of several vehicle technologies now available.”
Among these technologies are lane-centering assistance and adaptive cruise control — widely-used technologies that benefit from an understanding of how drivers interact with automation. “AVT uniquely combines vehicle and driver data to help inform future products and highlight the interplay between the performance of these features and the drivers using them,” says Josh Domeyer, principal scientist at CSRC.
Influencing global standards and Olympic-scale innovation
Insights from MIT’s pedestrian-driver interaction research with CSRC also helped shape Toyota’s automated vehicle communication systems. “These data helped develop our foundational understanding that drivers and pedestrians use their movements to communicate during routine traffic encounters,” said Domeyer. “This concept informed the deployment of Toyota’s e-Palette at the Tokyo Olympics, and it has been captured as a best practice in an ISO standard for automated driving system communication.”
The AVT Consortium's naturalistic driving datasets continue to serve as a foundation for behavioral safety strategies. From identifying moments of distraction to understanding how drivers multitask behind the wheel, the work is guiding subtle but impactful design considerations.
“By studying the natural behaviors of drivers and their contexts in the AVT datasets, we hope to identify new ways to encourage safe habits that align with customer preferences,” Domeyer says. “These can include subtle nudges, or modifications to existing vehicle features, or even communication and education partnerships outside of Toyota that reinforce these safe driving habits.”
Professor Yossi Sheffi, director of MIT CTL, comments, “This partnership exemplifies the impact of MIT collaborative research on industry to make real, practical innovation possible.”
A model for industry-academic collaboration
Founded in 2015, the AVT Consortium brings together automotive manufacturers, suppliers, and insurers to accelerate research in driver behavior, safety, and the transition toward automated systems. The consortium’s interdisciplinary approach — integrating engineering, human factors, and data science — has helped generate one of the world’s most unique and actionable real-world driving datasets.
As Toyota celebrates its research milestone, MIT reflects on a partnership that exemplifies the power of industry-academic collaboration to shape safer, smarter mobility.
Through MIT’s AgeLab and Advanced Vehicle Technology Consortium, researchers have collected and analyzed vast real-world driving datasets that have helped inform Toyota’s vehicle design and safety systems.
Health
Researchers report ‘astounding’ obesity surge in U.S.
Mass General Brigham Communications
October 15, 2025
4 min read
Prevalence rises to 70 percent under definition that includes measures other than just BMI
The prevalence of obesity in the U.S. could rise sharply under a new definition of the condition released earlier this year by the Lancet Diabetes and Endocrinology Commission, according to research co-authored by Harvard-Mass General specialists.
Investigators from Harvard and Mass General Brigham found that when applying the new criteria, which expand upon the traditional use of body mass index (BMI) to include measures of body fat distribution, the prevalence of obesity increased from about 40 percent to about 70 percent among more than 300,000 people included in the study. The rise was more pronounced among older adults.
Additionally, the researchers found that the newly added individuals also had a higher risk of adverse health outcomes. The results are published in JAMA Network Open.
“We already thought we had an obesity epidemic, but this is astounding,” said co-first author Lindsay Fourman, a Mass General endocrinologist and an assistant professor at Harvard Medical School. “With potentially 70 percent of the adult population now considered to have excess fat, we need to better understand what treatment approaches to prioritize.”
Traditionally, obesity has been defined by BMI, which estimates body fat based on a person’s weight and height. But other anthropometric measures — such as waist circumference, waist-to-height ratio, or waist-to-hip ratio — may further account for fat distribution and help differentiate between muscle and fat mass.
Under the new framework, a person is classified as having obesity if they have a high BMI plus at least one elevated anthropometric measure (a condition the authors term “BMI-plus-anthropometric obesity”), or if they have a normal BMI and at least two elevated anthropometric measures (a condition termed “anthropometric-only obesity”).
The new definition also distinguishes between preclinical and clinical obesity, with clinical obesity defined as the presence of obesity-related physical impairment or organ dysfunction. At least 76 organizations have endorsed the new guidelines, including the American Heart Association and the Obesity Society.
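A minimal sketch of this decision logic, under assumed thresholds, is shown below; the actual framework specifies its own cutoffs (which vary by sex and population), and clinical classification requires clinician assessment.

# Minimal sketch of the classification logic described above.
# The BMI cutoff and inputs are illustrative assumptions, not the
# Lancet Commission's exact criteria.

def classify_obesity(bmi, num_elevated_anthropometrics, has_organ_dysfunction):
    # num_elevated_anthropometrics counts elevated measures such as waist
    # circumference, waist-to-height ratio, or waist-to-hip ratio.
    high_bmi = bmi >= 30  # assumed cutoff for illustration

    if high_bmi and num_elevated_anthropometrics >= 1:
        category = "BMI-plus-anthropometric obesity"
    elif not high_bmi and num_elevated_anthropometrics >= 2:
        category = "anthropometric-only obesity"
    else:
        return "no obesity under this framework"

    # Clinical vs. preclinical hinges on obesity-related impairment or organ dysfunction.
    stage = "clinical" if has_organ_dysfunction else "preclinical"
    return f"{category} ({stage})"

print(classify_obesity(27.5, 2, False))  # -> anthropometric-only obesity (preclinical)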
The study analyzed participants in the National Institutes of Health All of Us Research Program’s cohort of more than 300,000 Americans. Obesity prevalence was 68.6 percent with the new definition, versus 42.9 percent under the traditional BMI-based definition. This increase was entirely driven by inclusion of individuals with anthropometric-only obesity. Obesity rates varied by sex, race, and especially by age — affecting nearly 80 percent of adults over 70.
Importantly, the study found that those with anthropometric-only obesity — who would not have been classified as having obesity by the traditional definition — had a higher risk of diabetes, cardiovascular disease, and mortality than people without obesity. About half of all individuals who met the new obesity criteria had clinical obesity, and this proportion was only slightly lower in the anthropometric-only obesity group compared with the BMI-plus-anthropometric obesity group.
“We have always recognized the limitations of BMI as a single marker for obesity because it doesn’t take into account body fat distribution,” said senior author Steven Grinspoon of Mass General and Harvard Medical School. “Seeing an increased risk of cardiovascular disease and diabetes in this new group of people with obesity, who were not considered to have obesity before, brings up interesting questions about obesity medications and other therapeutics.”
The researchers emphasized that further studies are needed to better understand the causes of and optimal treatments for anthropometric-only obesity. The team previously developed a therapeutic that reduces waist circumference, and plans to explore the utility of different treatment strategies in this newly defined population.
“Identifying excess body fat is very important as we’re finding that even people with a normal BMI but with abdominal fat accumulation are at increased health risk,” Fourman said. “Body composition matters — it’s not just pounds on a scale.”
The research described in this article received funding from the National Institutes of Health.
The David and Lucile Packard Foundation has announced that two MIT affiliates have been named 2025 Packard Fellows for Science and Engineering. Darcy McRose, the Thomas D. and Virginia W. Cabot Career Development Assistant Professor in the MIT Department of Civil and Environmental Engineering, has been honored, along with Mehtaab Sawhney ’20, PhD ’24, a graduate of the Department of Mathematics who is now at Columbia University.
The honorees are among 20 junior faculty named among the nation’s most innovative early-career scientists and engineers. Each Packard Fellow receives an unrestricted research grant of $875,000 over five years to support their pursuit of pioneering research and bold new ideas.
“I’m incredibly grateful and honored to be awarded a Packard Fellowship,” says McRose. “It will allow us to continue our work exploring how small molecules control microbial communities in soils and on plant roots, with much-appreciated flexibility to follow our imagination wherever it leads us.”
McRose and her lab study secondary metabolites — small organic molecules that microbes and plants release into soils. Often known as antibiotics, these compounds do far more than fight infections; they can help unlock soil nutrients, shape microbial communities around plant roots, and influence soil fertility.
“Antibiotics made by soil microorganisms are widely used in medicine, but we know surprisingly little about what they do in nature,” explains McRose. “Just as healthy microbiomes support human health, plant microbiomes support plant health, and secondary metabolites can help to regulate the microbial community, suppressing pathogens and promoting beneficial microbes.”
Her lab integrates techniques from genetics, chemistry, and geosciences to investigate how these molecules shape interactions between microbes and plants in soil — one of Earth’s most complex and least-understood environments. By using secondary metabolites as experimental tools, McRose aims to uncover the molecular mechanisms that govern processes like soil fertility and nutrient cycling that are foundational to sustainable agriculture and ecosystem health.
Studying antibiotics in the environments where they evolved could also yield new strategies for combating soil-borne pathogens and improving crop resilience. “Soil is a true scientific frontier,” McRose says. “Studying these environments has the potential to reveal fascinating, fundamental insights into microbial life — many of which we can’t even imagine yet.”
A native of California, McRose earned her bachelor’s and master’s degrees from Stanford University, followed by a PhD in geosciences from Princeton University. Her graduate thesis focused on how bacteria acquire trace metals from the environment. Her postdoctoral research on secondary metabolites at Caltech was supported by multiple fellowships, including the Simons Foundation Marine Microbial Ecology Postdoctoral Fellowship, the L’Oréal USA For Women in Science Fellowship, and a Division Fellowship from Biology and Biological Engineering at Caltech.
McRose joined the MIT faculty in 2022. In 2025, she was named a Sloan Foundation Research Fellow in Earth System Science and awarded the Maseeh Excellence in Teaching Award.
Past Packard Fellows have gone on to earn the highest honors, including Nobel Prizes in chemistry and physics, the Fields Medal, Alan T. Waterman Awards, Breakthrough Prizes, Kavli Prizes, and elections to the National Academies of Science, Engineering, and Medicine. Each year, the foundation reviews 100 nominations for consideration from 50 invited institutions. The Packard Fellowships Advisory Panel, a group of 12 internationally recognized scientists and engineers, evaluates the nominations and recommends 20 fellows for approval by the Packard Foundation Board of Trustees.
Environmental microbiologist Darcy McRose is one of 20 innovative early-career scientists and engineers named to the 2025 class of Packard Fellows for Science and Engineering. Each fellow receives $875,000 over five years to pursue their research.
To help mitigate climate change, companies are using bioreactors to grow algae and other microorganisms that are hundreds of times more efficient at absorbing CO2 than trees. Meanwhile, in the pharmaceutical industry, cell culture is used to manufacture biologic drugs and other advanced treatments, including lifesaving gene and cell therapies.
Both processes are hampered by cells’ tendency to stick to surfaces, which leads to a huge amount of waste and downtime for cleaning. A similar problem slows down biofuel production, interferes with biosensors and implants, and makes the food and beverage industry less efficient.
Now, MIT researchers have developed an approach for detaching cells from surfaces on demand, using electrochemically generated bubbles. In an open-access paper published in Science Advances, the researchers demonstrated their approach in a lab prototype and showed it could work across a range of cells and surfaces without harming the cells.
“We wanted to develop a technology that could be high-throughput and plug-and-play, and that would allow cells to attach and detach on demand to improve the workflow in these industrial processes,” says Professor Kripa Varanasi, senior author of the study. “This is a fundamental issue with cells, and we’ve solved it with a process that can scale. It lends itself to many different applications.”
Joining Varanasi on the study are co-first authors Bert Vandereydt, a PhD student in mechanical engineering, and former postdoc Baptiste Blanc.
Solving a sticky problem
The researchers began with a mission.
“We’ve been working on figuring out how we can efficiently capture CO2 across different sources and convert it into valuable products for various end markets,” Varanasi says. “That’s where this photobioreactor and cell detachment comes into the picture.”
Photobioreactors are used to grow carbon-absorbing algae cells by creating tightly controlled environments involving water and sunlight. They feature long, winding tubes with clear surfaces to let in the light algae need to grow. When algae stick to those surfaces, they block out the light, requiring cleaning.
“You have to shut down and clean up the entire reactor as frequently as every two weeks,” Varanasi says. “It’s a huge operational challenge.”
The researchers realized other industries have similar problems due to many cells’ natural adhesion, or stickiness. Each industry has its own solution for cell adhesion depending on how important it is that the cells survive. Some people scrape the surfaces clean, while others use special coatings that are toxic to cells.
In the pharmaceutical and biotech industries, cell detachment is typically carried out using enzymes. However, this method poses several challenges — it can damage cell membranes, is time-consuming, and requires large amounts of consumables, resulting in millions of liters of biowaste.
To create a better solution, the researchers began by studying other efforts to clear surfaces with bubbles, which mainly involved spraying bubbles onto surfaces and had been largely ineffective.
“We realized we needed the bubbles to form on the surfaces where we don’t want these cells to stick, so when the bubbles detach it creates a local fluid flow that creates shear stress at the interface and removes the cells,” Varanasi explains.
Electric currents generate bubbles by splitting water into hydrogen and oxygen. But previous attempts at using electricity to detach cells were hampered because the cell culture mediums contain sodium chloride, which turns into bleach when combined with an electric current. The bleach damages the cells, making it impractical for many applications.
“The culprit is the anode — that’s where the sodium chloride turns to bleach,” Vandereydt explained. “We figured if we could separate that electrode from the rest of the system, we could prevent bleach from being generated.”
To make a better system, the researchers built a 3-square-inch glass surface and deposited a gold electrode on top of it. The layer of gold is so thin it doesn’t block out light. To keep the other electrode separate, the researchers integrated a special membrane that only allows protons to pass through. The setup allowed the researchers to send a current through without generating bleach.
To test their setup, they allowed algae cells from a concentrated solution to stick to the surfaces. When they applied a voltage, the bubbles separated the cells from the surfaces without harming them.
The researchers also studied the interaction between the bubbles and cells, finding the higher the current density, the more bubbles were created and the more algae was removed. They developed a model for understanding how much current would be needed to remove algae in different settings and matched it with results from experiments involving algae as well as cells from ovarian cancer and bones.
“Mammalian cells are orders of magnitude more sensitive than algae cells, but even with those cells, we were able to detach them with no impact to the viability of the cell,” Vandereydt says.
Getting to scale
The researchers say their system could represent a breakthrough in applications where bleach or other chemicals would harm cells. That includes pharmaceutical and food production.
“If we can keep these systems running without fouling and other problems, then we can make them much more economical,” Varanasi says.
For cell culture plates used in the pharmaceutical industry, the team envisions their system comprising an electrode that could be robotically moved from one culture plate to the next, to detach cells as they’re grown. It could also be coiled around algae harvesting systems.
“This has general applicability because it doesn’t rely on any specific biological or chemical treatments, but on a physical force that is system-agnostic,” Varanasi says. “It’s also highly scalable to a lot of different processes, including particle removal.”
Varanasi cautions there is much work to be done to scale up the system. But he hopes it can one day make algae and other cell harvesting more efficient.
“The burning problem of our time is to somehow capture CO2 in a way that’s economically feasible,” Varanasi says. “These photobioreactors could be used for that, but we have to overcome the cell adhesion problem.”
The work was supported, in part, by Eni S.p.A. through the MIT Energy Initiative, the Belgian American Educational Foundation Fellowship, and the Maria Zambrano Fellowship.
To test their setup, researchers allowed algae cells to stick to the surface of the photobioreactor. When they applied a voltage, the bubbles separated the cells from the surfaces without harming them.
Nation & World
Putin doesn’t care what we think
Journalists Baker and Glasser explore how Russian president has interpreted, defied ambitions of U.S. leaders
Christina Pazzanese
Harvard Staff Writer
October 15, 2025
6 min read
Vladimir Putin, who rose to power in late 1999 as acting president of the Russian Federation following the resignation of Boris Yeltsin, is three years from surpassing Josef Stalin as Russia’s longest-serving leader of the last two centuries.
On Oct. 7 — Putin’s 73rd birthday — the Gazette spoke with Susan Glasser ’90, a staff writer for The New Yorker, and her husband, Peter Baker, chief White House correspondent for The New York Times, about the Russian leader and the five U.S. presidents he’s faced during his 25-plus years in office, a topic they’re exploring for a forthcoming book. Glasser and Baker, fall fellows at the Institute of Politics, reported from Moscow on Putin’s early years in office, work that shaped their 2005 book “Kremlin Rising.”
This conversation has been edited for clarity and length.
When you were reporting from Moscow, did you ever envision Putin would still be in power 25 years later?
Glasser: No, and anyone who tells you that they did is not being honest (laughs). There were plenty of people who were very clear-eyed from the beginning about Putin’s origins in the KGB, his suspicions of the West, his lamentation about Russia’s lost empire. We knew plenty of people in Moscow and some people in the United States who saw that he was no democrat, that he was no long-term friend of the United States. That was not such an outlier belief. I can’t think of a single person, whatever their ideological views about Russia, who saw in Putin the possibility to break Stalin’s record.
Peter Baker and Susan Glasser at Harvard.
Harvard file photo
Which U.S. president had the best read on Putin?
Baker: What we found striking is that all of them, Republican or Democrat, came into office thinking they could manage him. And to each of them, Putin has been a singular challenge who proves not to be manageable. They did not, in fact, get out of him what they thought they could get out of him or keep him where they thought they could keep him, all the way up to the present day.
Glasser: One of the things that is notable is that many of them started out the Putin era with the idea that Russia’s period of post-Soviet weakness was just going to continue. The big mistake, according to this argument, would be not that we misread Putin (although I think many did) but that we misread Russia, and we made the mistake of believing our own fairy tale about a world where geopolitical Great Power competition somehow had faded away. That, obviously, was a misreading of history. We certainly didn’t foresee a “no limits” alliance between Russia and China, and made the mistake of projecting forward the rift between Russia and China that had persisted from the late Soviet era.
American presidents, Democrats and Republicans, made the mistake of overvaluing a personal relationship with Putin, as if it would really matter in the end against these bigger geopolitical forces. It looked bad at the time, and it looks even worse now, when George W. Bush said that he “looked into his soul and saw Putin as a man he could do business with.” That overlooked Putin’s background in the KGB and his habit of saying what his interlocutors wanted to hear. It’s so important to underscore that multiple presidents, not just George W. Bush, made inaccurate judgments about who Putin was and what he was going to do.
Is there one U.S. president Putin preferred or particularly disliked over the others?
Baker: I think, at first, he thought he had a good relationship with George W. Bush; that they did see eye-to-eye. The two of them met more than any American and Russian leaders in history, and they were friendly. And yet, from Putin’s point of view, what he would say is, “I hear these things from presidents, they all say these things, but the policy stays the same.” And I think he has come to the conclusion that it doesn’t matter who is president because America is still going to do what America does, and that the president can say whatever they want, but it doesn’t change that. Even with Trump, whatever Putin thought of Trump coming in — whether he was somebody who could be manipulated, somebody he could manage — I think he still came away from his experiences with Trump thinking, “I didn’t get what I want out of this.” It’s not about the individual president.
What’s a fundamental misunderstanding people have about Putin?
Glasser: For years, many of the Russia experts in and out of the government that we respected the most were of the view that Putin was an insecure leader, and that a lot of what he did was driven by the need to shore up a not very strong system or to stay in power or to make sure that unrest didn’t happen inside Russia.
When he came back to power after the Medvedev interregnum and there were huge protests in the streets of St. Petersburg and Moscow, it was the biggest challenge to Putin’s tenure in the entire quarter century. And so, I think a smart explanation has been that from then on forward, Putin was determined to have a much tighter control on Russia — that he pivoted right in terms of how he governed the country and in terms of his foreign policy, that he was not tolerant of any kind of dissent, and that that really set him on the path that we see today. That is a compelling explanation.
Baker: He doesn’t care about what we care about. He doesn’t care about what we think he should care about, and we should stop trying to see him through our own lens because that’s not the way he sees the world. We think he should want his country to be strong economically and part of a vibrant international economy and world order, that that would be better for his people. Certainly, when we were there, we saw even in four years how much life got better for a lot of Russians because they were more integrated with the West, because they were economically developing. And he didn’t care that he closed it off. He doesn’t care that tens of thousands, hundreds of thousands, of Russians have been killed or wounded in Ukraine. He doesn’t care that more than a million Russians have left the country — the brain drain from Russia is staggering. He doesn’t care that he has squandered a lot of things that he could have claimed as his successes over the last 25 years with this invasion of Ukraine. We need to see him in a clear-eyed way, rather than trying to see him through our own Western lens.
Health
How sexism in medicine continues to endanger women’s health
Breast cancer specialist Elizabeth Comen (right) with Janet Rich-Edwards.
Photos by Veasey Conway/Harvard Staff Photographer
Alvin Powell
Harvard Staff Writer
October 15, 2025
5 min read
Radcliffe symposium explores persistent bias in care today, from marginalizing heart disease symptoms to over-diagnosing anxiety
It is past time for women’s health to move beyond “boobs and tubes” — as one expert termed the field’s reproductive focus — to address the disparities and prejudice that have hindered medical providers from effectively treating more than half of the population.
That’s according to experts who gathered for the Gellert Family Science Symposium held recently at the Radcliffe Institute for Advanced Study examining persistent gaps between men’s and women’s healthcare.
The event’s keynote speaker was Elizabeth Comen, a breast cancer specialist and author of the 2024 book “All in Her Head: The Truth and Lies Early Medicine Taught Us about Women’s Bodies and Why It Matters Today.” She pointed out that women have twice the risk of Alzheimer’s disease as men, and that heart attack symptoms common among women such as jaw pain and indigestion are described medically as “atypical.”
“There’s so much focus on our reproductive health but there are so many other aspects to women’s health, like atypical heart disease,” said Comen, associate professor of health and co-director of the Women’s Health Collective at NYU Langone Health. “But when I was in Medical School all I learned about was that women’s chest pain was ‘atypical’ even though we’re greater than 50 percent of the population and heart disease is the No. 1 killer of women.”
Comen, who graduated from Harvard Medical School in 2004 and Harvard College in 2000, reviewed the history of how the medical system has discounted women’s health concerns. It partly stems from the fact that historically doctors were male and viewed what was normal and abnormal through the lens of the male body. Women were viewed as emotional to the extent that female “hysteria” was in the Diagnostic and Statistical Manual of Mental Disorders until 1980, Comen said. Prevailing attitudes focused on control, promoted shame, and spotlighted women’s sexuality out of proportion to their actual healthcare needs.
Women were viewed as emotional to the extent that female “hysteria” was in the Diagnostic and Statistical Manual of Mental Disorders until 1980.
Elizabeth Comen
“The question is, how does this show up in our healthcare system today? Because we’re so evolved. But are we?” said Comen, who was interviewed by Janet Rich-Edwards, director of life course epidemiology at Brigham and Women’s Hospital and associate professor at Harvard Medical School and the Harvard T.H. Chan School of Public Health.
The legacy of those attitudes endures today, Comen said, in examining rooms where physicians speaking with male cancer patients are twice as likely to discuss potential sexual dysfunction as they are with women in the same circumstances, in plastic surgery suites where, anecdotally, male doctors routinely suggest larger breast implants, and in hospitals where anxiety is over-diagnosed in women, dismissing valid health concerns.
She cited a case of a 34-year-old metastatic breast cancer patient whose disease had spread throughout her body extensively enough that Comen thought she didn’t have long to live. The woman, who had been orphaned, had no family support, and had lived through significant trauma, received medication that dramatically reduced the cancer. A few years later when the pandemic hit, the woman got COVID and was admitted with stuttered speech, tremors, and other stroke-like symptoms, but was dismissed as being anxious. Comen insisted on a neurology consult.
“I was thinking, I’ve met this woman when she was the most anxious she’s ever been in her life. Something is really wrong,” Comen said. “In the next room, I had a woman who also had COVID, no history of anxiety, and was eating her hand to her bone. There was something neurological going on in these women’s brains. I didn’t know what it was. There are many stories of this, when you think about endometriosis and the things that women have gone through and been told it’s all in their heads, when we really just didn’t invest in figuring out what’s going on.”
The picture of women’s health isn’t all bad, speakers said. The factors that have buoyed health in the general population — improved sanitation, vaccines, antibiotics — have not passed women by. In fact, women globally have longer life expectancies than men. Deborah Kado, professor of medicine, chief of geriatric research at Stanford University School of Medicine, and co-director of the Stanford Center on Longevity, said a girl born today can expect to live to 100, decades longer than expectations a century ago. Even that apparent good news, though, has a dark lining: Women can expect to spend 12 to 15 years at the end of their lives in poor health, significantly longer than men.
Colin Hill, co-founder and chief executive officer of Aitia, which is leveraging artificial intelligence to discover new therapeutics, said things are about to change for the better. Medical science understands disease at a fundamental level — the interplay of genes and proteins — in a woefully small number of conditions, just 5 percent. The powerful artificial intelligence tools being developed and deployed today have the potential to change the medical landscape, leading to deeper understandings of the other 95 percent.
“We’re now on the precipice of real change across a number of diseases, especially those diseases that particularly affect women,” Hill said, “because we now have the chance to start to reverse-engineer the actual complex system of interacting genes and proteins that drive clinical outcomes.”
MIT Music and Theater Arts fondly remembers the legacy of Professor Emerita Jeanne Shapiro Bamberger, who passed away peacefully at home in Berkeley, California, of natural causes on Dec. 12, 2024 at the age of 100.
For three decades at the Institute, Bamberger found ways to use computers to engage students and help them learn music. A trained pianist who became fascinated with the idea of using technology to gain insights into music education, Bamberger ultimately helped to change how music was taught at MIT and elsewhere.
Bamberger was born on Feb. 11, 1924 in Minneapolis, Minnesota. Her mother, Gertrude Shapiro (nee Kulberg), from a Romanian Jewish family, studied child psychology and was active in the League of Women Voters. Her father, Morse Shapiro, of Lithuanian and Polish Jewish heritage, was a groundbreaking pediatric cardiologist.
In 1969, Bamberger began her 32-year career at MIT, initially in the former MIT Education Department. While at MIT, Bamberger became the first woman to earn tenure in the Music and Theater Arts Section. She was known for pioneering the use of computer languages to teach children music. She also used her computer innovations to study how children — and by extension, all humans — learn music, and this line of inquiry in particular became her life's work.
Ahead of her time, Bamberger worked in the MIT Artificial Intelligence Lab in the 1980s and developed computer languages (MusicLogo and Impromptu) while at the MIT Division for Study and Research in Education from 1975 to 1995. She became associate professor in music and theater arts in 1981, earned tenure soon thereafter, and chaired the department in 1989-90. During this period, she continued to perform as a concert pianist, taking part in concerts with the MIT Symphony Orchestra, and actively playing chamber music both at MIT and in the community. She also taught at the Harvard University Department of Education.
Institute Professor Marcus Thompson recollects, “During her time with us as a senior professor she was clearly a jewel in the crown. For someone who had studied piano with an historic legend in Artur Schnabel, who had studied with and known at least one of the French Six, Darius Milhaud, and worked with French composer and conductor Pierre Boulez, she was among that group of our professors who continually advocated for a new music building, considered the possibility of a graduate program in music at a time when we were being pushed to grow, at a time when she was our only senior woman when the need to do better was finally seen.” Both the dedicated music building and the graduate music program are now a reality.
Bamberger loved her work and was beloved and admired by her students and colleagues. Kenan Sahin Distinguished Professor Evan Ziporyn shares that she “was very much a shaping presence for our section — MIT Music and Theater Arts wouldn't be what we are today without her contributions. She’s also just a very cool person — I mean, how many 90-year-old academics end up working with Herbie Hancock and taking their research to the White House?”
Ziporyn adds that “among 7 million other singular accomplishments,” Bamberger published numerous articles and books including “The Art of Listening” with Howard Brofsky, “The Mind Behind the Musical Ear,” “Developing Musical Intuitions,” and “Discovering the Musical Mind.”
While at MIT, Bamberger took many students under her wing and assisted many more with their academic careers. Elaine Chew SM ’98, PhD ’00, an operations researcher, pianist, current professor of engineering at King’s College London, and mentee of Bamberger, says, “I would not be doing what I am today if not for Jeanne. A child prodigy turned music philosopher, Jeanne was a pioneer in music and AI long before it was fashionable. She was deeply interested in people and passionate about how we learn. I will not forget the day when I came to her with complaints about things not working. Rather than telling me what to do, Jeanne said, ‘What are you going to do about it?’ prompting me to reflect on and develop my own sense of agency.” (Chew speaks more on Bamberger’s inspirational role in a 2016 interview.)
All told, Bamberger had a creative, fertile mind and loved to ask probing questions, a quality she passed to her progeny and community — it was her excitement and her passion.
While a professor at MIT, Bamberger was a force to be reckoned with. In addition to her long and productive academic career — in which she published four books and nearly 20 book chapters — she was politically active and supported the anti-Vietnam War and civil rights movements. She continued teaching and publishing her work well into her 90s and had a strong community of companions and colleagues to the end.
In 2002, Bamberger became professor emerita at MIT and moved to Berkeley, California, continuing to teach in the Music Department at the University of California at Berkeley.
She was predeceased by her former husband, Frank K. Bamberger. She is survived by her two sons, Joshua and Paul (Chip); four grandchildren — Jerehme, Kaela, Eli, and Noah; and many caring relatives and friends.
Computational neuroscientist and singer/songwriter Kimaya (Kimy) Lecamwasam, who also plays electric bass and guitar, says music has been a core part of her life for as long as she can remember. She grew up in a musical family and played in bands all through high school.
“For most of my life, writing and playing music was the clearest way I had to express myself,” says Lecamwasam. “I was a really shy and anxious kid, and I struggled with speaking up for myself. Over time, composing and performing music became central to both how I communicated and to how I managed my own mental health.”
Along with equipping her with valuable skills and experiences, she credits her passion for music as the catalyst for her interest in neuroscience.
“I got to see firsthand not only the ways that audiences reacted to music, but also how much value music had for musicians,” she says. “That close connection between making music and feeling well is what first pushed me to ask why music has such a powerful hold on us, and eventually led me to study the science behind it.”
Lecamwasam earned a bachelor’s degree in 2021 from Wellesley College, where she studied neuroscience — specifically in the Systems and Computational Neuroscience track — and also music. During her first semester, she took a class in songwriting that she says made her more aware of the connections between music and emotions. While studying at Wellesley, she participated in the MIT Undergraduate Research Opportunities Program for three years. Working in the Department of Brain and Cognitive Sciences lab of Emery Brown, the Edward Hood Taplin Professor of Medical Engineering and Computational Neuroscience, she focused primarily on classifying consciousness in anesthetized patients and training brain-computer interface-enabled prosthetics using reinforcement learning.
“I still had a really deep love for music, which I was pursuing in parallel to all of my neuroscience work, but I really wanted to try to find a way to combine both of those things in grad school,” says Lecamwasam. Brown recommended that she look into the graduate programs at the MIT Media Lab within the Program in Media Arts and Sciences (MAS), which turned out to be an ideal fit.
“One thing I really love about where I am is that I get to be both an artist and a scientist,” says Lecamwasam. “That was something that was important to me when I was picking a graduate program. I wanted to make sure that I was going to be able to do work that was really rigorous, validated, and important, but also get to do cool, creative explorations and actually put the research that I was doing into practice in different ways.”
Exploring the physical, mental, and emotional impacts of music
Informed by her years of neuroscience research as an undergraduate and her passion for music, Lecamwasam focused her graduate research on harnessing the emotional potency of music into scalable, non-pharmacological mental health tools. Her master’s thesis focused on “pharmamusicology,” looking at how music might positively affect the physiology and psychology of those with anxiety.
The overarching theme of Lecamwasam’s research is exploring the various impacts of music and affective computing — physically, mentally, and emotionally. Now in the third year of her doctoral program in the Opera of the Future group, she is currently investigating the impact of large-scale live music and concert experiences on the mental health and well-being of both audience members and performers. She is also working to clinically validate music listening, composition, and performance as health interventions, in combination with psychotherapy and pharmaceutical interventions.
Her recent work, in collaboration with Professor Anna Huang’s Human-AI Resonance Lab, assesses the emotional resonance of AI-generated music compared to human-composed music; the aim is to identify more ethical applications of emotion-sensitive music generation and recommendation that preserve human creativity and agency, and can also be used as health interventions. She has co-led a wellness and music workshop at the Wellbeing Summit in Bilbao, Spain, and has presented her work at the 2023 CHI conference on Human Factors in Computing Systems in Hamburg, Germany and the 2024 Audio Mostly conference in Milan, Italy.
Lecamwasam has collaborated with organizations near and far to implement real-world applications of her research. She worked with Carnegie Hall's Weill Music Institute on its Well-Being Concerts and is currently partnering on a study assessing the impact of lullaby writing on perinatal health with the North Shore Lullaby Project in Massachusetts, an offshoot of Carnegie Hall’s Lullaby Project. Her main international collaboration is with a company called Myndstream, working on projects comparing the emotional resonance of AI-generated music to human-composed music and thinking of clinical and real-world applications. She is also working on a project with the companies PixMob and Empatica (an MIT Media Lab spinoff), centered on assessing the impact of interactive lighting and large-scale live music experiences on emotional resonance in stadium and arena settings.
Building community
“Kimy combines a deep love for — and sophisticated knowledge of — music with scientific curiosity and rigor in ways that represent the Media Lab/MAS spirit at its best,” says Professor Tod Machover, Lecamwasam’s research advisor, Media Lab faculty director, and director of the Opera of the Future group. “She has long believed that music is one of the most powerful and effective ways to create personalized interventions to help stabilize emotional distress and promote empathy and connection. It is this same desire to establish sane, safe, and sustaining environments for work and play that has led Kimy to become one of the most effective and devoted community-builders at the lab.”
Lecamwasam has participated in the SOS (Students Offering Support) program in MAS for a few years, which assists students from a variety of life experiences and backgrounds during the process of applying to the Program in Media Arts and Sciences. She will soon be the first MAS peer mentor as part of a new initiative through which she will establish and coordinate programs including a “buddy system,” pairing incoming master’s students with PhD students as a way to help them transition into graduate student life at MIT. She is also part of the Media Lab’s Studcom, a student-run organization that promotes, facilitates, and creates experiences meant to bring the community together.
“I think everything that I have gotten to do has been so supported by the friends I’ve made in my lab and department, as well as across departments,” says Lecamwasam. “I think everyone is just really excited about the work that they do and so supportive of one another. It makes it so that even when things are challenging or difficult, I’m motivated to do this work and be a part of this community.”
As a musician, Kimaya Lecamwasam says, “that close connection between making music and feeling well is what first pushed me to ask why music has such a powerful hold on us, and eventually led me to study the science behind it.”
A team of researchers, led by Cambridge University, has now formulated a method to assess whether carbon removal portfolios can help limit global warming over centuries.
The approach also distinguishes between buying credits to offset risk versus claiming net-negative emissions.
The study paves the way for nature-based carbon removal projects – such as planting new forests or restoring existing ones – to become effective climate change solutions when balanced with a portfolio of other removal techniques, according to researchers.
They say the findings, published in the journal Joule, show how nature-based and technology-based carbon storage solutions can work together through the transition to net zero, challenging the notion that only permanent tech-based “geological storage” can effectively tackle climate change.
The study’s authors point out that some carbon removal portfolios, such as California’s forest carbon offsets programme, may be severely underfunded for risks beyond the next few decades.
They call for a “buffer” of around two tonnes of stored carbon for every tonne offset in portfolios containing nature-based solutions, noting that this is “sufficient in most cases” to manage long-term risks.
However, researchers say the highest-risk portfolios, which rely heavily on nature-based offsetting, might need extreme buffers of nine tonnes of carbon removed for every tonne emitted. The authors caution against the use of such portfolios given the costs and uncertainties involved.
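As a rough illustration of the buffer arithmetic described above, the sketch below computes how much CO₂ would need to be removed to credibly offset a given amount of emissions under the roughly 2:1 buffer cited for diversified portfolios and the 9:1 buffer cited for high-risk, forestry-heavy ones. The portfolio labels and the one-megatonne figure are hypothetical, and this is only a back-of-envelope reading of the quoted ratios, not the study's risk management framework.

```python
# Illustrative only: back-of-envelope buffer arithmetic using the ratios quoted
# in the article (~2 tonnes stored per tonne offset for diversified portfolios,
# up to 9:1 for high-risk, nature-heavy ones). Portfolio labels and the
# 1 Mt figure are hypothetical, not from the study.

def required_removal(tonnes_to_offset: float, buffer_ratio: float) -> float:
    """Total tonnes of CO2 that must be removed and stored to offset
    `tonnes_to_offset`, given a risk buffer ratio."""
    return tonnes_to_offset * buffer_ratio

portfolios = {
    "diversified (nature + geological)": 2.0,  # ~2:1 buffer
    "high-risk, forestry-dominated": 9.0,      # extreme 9:1 buffer
}

emissions_to_offset = 1_000_000  # hypothetical: 1 Mt of CO2 to offset

for name, ratio in portfolios.items():
    total = required_removal(emissions_to_offset, ratio)
    print(f"{name}: remove {total / 1e6:.1f} Mt CO2 to offset 1.0 Mt")
```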
“Tech giants like Microsoft and Meta are collectively spending billions on carbon removal portfolios to offset their growing carbon footprints,” said lead author Dr Conor Hickey, Assistant Professor in Energy and Climate at Cambridge University’s Department of Land Economy.
“While companies and countries agree that increased investment in carbon removal is essential to reach net zero targets, they also want to understand whether carbon removal schemes can help stabilise global temperatures over the long term.”
“Our risk management approach offers one of the first reliable measures for portfolio managers targeting long-term temperature stabilisation,” said Hickey. “It shows that nature-based carbon storage such as tree planting has a bigger role to play than critics assume when used as part of a diversified carbon removal portfolio.”
“Durable net zero means geological net zero,” said Professor Myles Allen, a co-author on the paper and Professor of Geosystem Science at the University of Oxford. “To stabilise climate in line with Paris Agreement goals, anyone still relying on offsets must plan to shift entirely to carbon dioxide removal with geological storage by the middle of the century.”
Current market incentives favour cheaper and more available ‘biological’ projects to pull carbon dioxide (CO₂) from the atmosphere and store it, such as forestry, which locks carbon in trees, or biochar, where plant materials are heated to create a charcoal-like substance that traps carbon when incorporated into soil.
However, these methods carry a higher risk of carbon re-release, such as when land use changes or wildfires increase. They are often considered only a temporary solution – the carbon is not locked away for long enough to stem rising global temperatures.
Alternative tech-based solutions like Direct Air Capture (DAC) are proving hard to scale while costs remain high and the process remains energy-intensive. Yet the permanence of the carbon storage means this emerging technology is less vulnerable to reversal, such as through leakage. DAC can be combined with deep underground storage to lock the CO₂ away.
For the latest study, the research team have developed a new “risk management framework” to accurately calculate the additional CO₂ removal needed to keep temperatures stable over centuries for various storage portfolios.
Their analysis shows that in some cases, such as a high-risk portfolio dominated by forestry projects, the extra amount of CO₂ removal needed to make up for this risk doesn’t change much – whether the timescale is 300 or even 1,000 years.
“Removing more carbon now can effectively cover carbon storage risk for centuries, and this can be done with a mix of nature and tech, as long as the right buffers are built in,” said Hickey.
“Portfolios can combine expensive permanent solutions like DAC with lower-cost nature-based options like planting trees – matching society's willingness to pay while still contributing to temperature stabilisation goals.”
“Our approach enables strategic carbon storage choices based on current availability, while targeting long-term temperature stabilisation. It provides buyer flexibility while valuing lower-risk storage options, something today's market lacks,” said Hickey.
By 2050, the UK aims to achieve net zero, with geological storage expected to play a major role in storing any ongoing CO₂ emissions. Incoming UK and EU guidance states that projects must be subject to a minimum 200-year permanence requirement.
Research on a ‘portfolio approach’ to carbon removal enables firms to mix expensive tech-based solutions that inject carbon deep underground with lower-cost and currently more available nature-based options, such as forests and biochar.
Removing more carbon now can effectively cover carbon storage risk for centuries
Skinnider's work uses AI to identify chemical structures known as “metabolites” and discover how they influence health. Two Princeton graduate alums and a former postdoc are also among the 20 new Packard Fellows.
The nearly £500 million fund will make it possible for the 79 UK colleges, universities and other institutions involved in the coalition – formed by the Banking Engagement Forum based in the Department of Land Economy at the University of Cambridge – to make short-term cash-like investments without contributing to fossil fuel expansion within capital debt markets.
“This is the first cash fund we know of that will avoid providing liquidity to financial institutions who continue to finance companies that are building new infrastructure, such as coal- and gas-fired power plants, which will lock in fossil fuel combustion for decades,” University of Cambridge Chief Financial Officer Anthony Odgers said.
The new ‘quasi-money market fund’ is part of a broader movement towards climate-conscious investing, appealing to a diverse range of investors including universities, local authorities, pension funds, insurers, and others with substantial cash to invest and committed to doing so responsibly.
The fund will filter out fossil fuel companies, utilities, banks, insurers, and other companies that contribute to fossil fuel expansion. Companies that are excluded from the list can be readmitted if they stop engaging in or facilitating fossil fuel expansion.
The HEI coalition has indicated that its members collectively expect to invest close to £500 million in the product in the first instance. The fund is expected to launch towards the end of 2025, with more seed investors also expected to join prior to launch.
Coalition members include the University of Oxford, London School of Economics, University of Edinburgh, University College London and 75 other leading UK institutions.
“This initiative offers a practical and credible path for aligning our financial decisions with our climate commitments and institutional values. This provides a solution to institutions that is wider than the higher education sector and which will hopefully act as a catalyst to concrete change," Oxford Group Treasurer Sean Anderson said.
Amundi is a leading European asset manager, which manages more than €2.2 trillion of assets.
“At Amundi we are committed to the view that delivering strong stewardship as well as expert responsible investment solutions will facilitate the transition to an inclusive, low carbon economy while delivering stable, long term sustainable value for clients. This product, developed for the UK’s leading universities and higher education institutions, reflects a growing recognition among UK investors of the importance of these efforts in supporting long-term social, environmental and economic benefits,” said Jean-Jacques Barbéris, Head of Institutional & Corporate Clients Division and ESG at Amundi.
A Cambridge-led coalition of UK Higher Education Institutions (HEIs) has selected asset manager Amundi Investment Solutions to create a cash fund that excludes companies contributing to fossil fuel expansion globally.
This is the first cash fund we know of that will avoid providing liquidity to financial institutions who continue to finance companies that are building new infrastructure
The Energy Science Center at ETH Zurich is celebrating its 20th anniversary. Executive Director Christian Schaffner sits down with us to take a look back at the Center’s beginnings, the key moments, and the current challenges. He also explains why the upcoming Energy Week is more than just a professional event.
People tend to think of quantum materials — whose properties arise from quantum mechanical effects — as exotic curiosities. But some quantum materials have become a ubiquitous part of our computer hard drives, TV screens, and medical devices. Still, the vast majority of quantum materials never accomplish much outside of the lab.
What makes certain quantum materials commercial successes and others commercially irrelevant? If researchers knew, they could direct their efforts toward more promising materials — a big deal since they may spend years studying a single material.
Now, MIT researchers have developed a system for evaluating the scale-up potential of quantum materials. Their framework combines a material’s quantum behavior with its cost, supply chain resilience, environmental footprint, and other factors. The researchers used their framework to evaluate over 16,000 materials, finding that the materials with the highest quantum fluctuation in the centers of their electrons also tend to be more expensive and environmentally damaging. The researchers also identified a set of materials that achieve a balance between quantum functionality and sustainability for further study.
The team hopes their approach will help guide the development of more commercially viable quantum materials that could be used for next generation microelectronics, energy harvesting applications, medical diagnostics, and more.
“People studying quantum materials are very focused on their properties and quantum mechanics,” says Mingda Li, associate professor of nuclear science and engineering and the senior author of the work. “For some reason, they have a natural resistance during fundamental materials research to thinking about the costs and other factors. Some told me they think those factors are too ‘soft’ or not related to science. But I think within 10 years, people will routinely be thinking about cost and environmental impact at every stage of development.”
The paper appears in Materials Today. Joining Li on the paper are co-first authors and PhD students Artittaya Boonkird, Mouyang Cheng, and Abhijatmedhi Chotrattanapituk, along with PhD students Denisse Cordova Carrizales and Ryotaro Okabe; former graduate research assistants Thanh Nguyen and Nathan Drucker; postdoc Manasi Mandal; Instructor Ellan Spero of the Department of Materials Science and Engineering (DMSE); Professor Christine Ortiz of DMSE; Professor Liang Fu of the Department of Physics; Professor Tomas Palacios of the Department of Electrical Engineering and Computer Science (EECS); Associate Professor Farnaz Niroui of EECS; Assistant Professor Jingjie Yeo of Cornell University; and PhD student Vsevolod Belosevich and Assistant Professor Qiong Ma of Boston College.
Materials with impact
Cheng and Boonkird say that materials science researchers often gravitate toward quantum materials with the most exotic quantum properties rather than the ones most likely to be used in products that change the world.
“Researchers don’t always think about the costs or environmental impacts of the materials they study,” Cheng says. “But those factors can make them impossible to do anything with.”
Li and his collaborators wanted to help researchers focus on quantum materials with more potential to be adopted by industry. For this study, they developed methods for evaluating factors like the materials’ price and environmental impact using their elements and common practices for mining and processing those elements. At the same time, they quantified the materials’ level of “quantumness” using an AI model created by the same group last year, based on a concept proposed by MIT professor of physics Liang Fu, termed quantum weight.
“For a long time, it’s been unclear how to quantify the quantumness of a material,” Fu says. “Quantum weight is very useful for this purpose. Basically, the higher the quantum weight of a material, the more quantum it is.”
The researchers focused on a class of quantum materials with exotic electronic properties known as topological materials, eventually assigning over 16,000 materials scores on environmental impact, price, import resilience, and more.
For the first time, the researchers found a strong correlation between the material’s quantum weight and how expensive and environmentally damaging it is.
“That’s useful information because the industry really wants something very low-cost,” Spero says. “We know what we should be looking for: high quantum weight, low-cost materials. Very few materials being developed meet that criteria, and that likely explains why they don’t scale to industry.”
The researchers identified 200 environmentally sustainable materials and further refined the list down to 31 material candidates that achieved an optimal balance of quantum functionality and high-potential impact.
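The sketch below shows, in generic form, how a multi-criteria screen of this kind can work: each material carries a quantum-functionality score alongside cost and environmental scores, and only candidates that clear a bar on all three are ranked. The materials, scores, weights, and thresholds here are invented for illustration; they are not the MIT team's actual framework, data, or values.

```python
# Hypothetical sketch of a multi-criteria screen in the spirit of the framework
# described above. All names, numbers, weights, and thresholds are invented.

from dataclasses import dataclass

@dataclass
class Material:
    name: str
    quantum_weight: float   # higher = "more quantum" (normalized 0-1)
    cost_score: float       # higher = cheaper (normalized 0-1)
    env_score: float        # higher = lower environmental impact (0-1)

def composite_score(m: Material, w_q=0.5, w_c=0.25, w_e=0.25) -> float:
    """Weighted sum of the normalized criteria; weights are illustrative."""
    return w_q * m.quantum_weight + w_c * m.cost_score + w_e * m.env_score

candidates = [
    Material("material_A", quantum_weight=0.9, cost_score=0.2, env_score=0.3),
    Material("material_B", quantum_weight=0.6, cost_score=0.8, env_score=0.7),
    Material("material_C", quantum_weight=0.4, cost_score=0.9, env_score=0.9),
]

# Keep only materials that clear a minimum bar on every criterion, then rank.
shortlist = sorted(
    (m for m in candidates
     if m.quantum_weight > 0.5 and m.cost_score > 0.4 and m.env_score > 0.4),
    key=composite_score,
    reverse=True,
)
for m in shortlist:
    print(f"{m.name}: composite score {composite_score(m):.2f}")
```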
The researchers also found that several widely studied materials exhibit high environmental impact scores, indicating they will be hard to scale sustainably. “Considering the scalability of manufacturing and environmental availability and impact is critical to ensuring practical adoption of these materials in emerging technologies,” says Niroui.
Guiding research
Many of the topological materials evaluated in the paper have never been synthesized, which limited the accuracy of the study’s environmental and cost predictions. But the authors say the researchers are already working with companies to study some of the promising materials identified in the paper.
“We talked with people at semiconductor companies that said some of these materials were really interesting to them, and our chemist collaborators also identified some materials they find really interesting through this work,” Palacios says. “Now we want to experimentally study these cheaper topological materials to understand their performance better.”
“Solar cells have an efficiency limit of 34 percent, but many topological materials have a theoretical limit of 89 percent. Plus, you can harvest energy across all electromagnetic bands, including our body heat,” Fu says. “If we could reach those limits, you could easily charge your cell phone using body heat. These are performances that have been demonstrated in labs, but could never scale up. That’s the kind of thing we’re trying to push forward."
This work was supported, in part, by the National Science Foundation and the U.S. Department of Energy.
MIT researchers have developed a system for evaluating the scale-up potential of quantum materials. Their data-driven framework combines a material’s quantum behavior with its cost, supply chain resilience, environmental footprint, and other factors.
There was excitement in the air as 26 students from the NUS Department of Geography’s Field Investigation in Physical Geography course got ready for a journey that would take them far beyond textbooks and lecture halls.
For many, Africa had always felt like a wonderland of vast savannahs, dramatic mountain ranges and wildlife seen only on screen. But this was no safari holiday. Over 11 days, students from NUS Geography and the Environmental Studies (BES) programme would traverse Kenya’s legendary landscapes, immersing themselves in the realities of environmental change, human-wildlife interactions, and the deep connections between people and place.
Far beyond teaching standard field techniques, the GE4220 course centres on understanding one of the most dynamic landscapes on Earth through the multiple lenses of earth system sciences, environmental change, people and land use, human-wildlife interactions, and biodiversity conservation.
Head of NUS Geography Professor David Taylor, who has been leading such trips for more than two decades, said the trip aimed to apply field techniques, study environmental change, and understand the deep interconnections between people, land, wildlife and climate.
“Students are taught that it is impossible to truly understand physical environments without also considering humans and their role in shaping them. Coming to terms with the diversity of environmental conditions here requires understanding the deep interconnections between people and place,” said Prof Taylor.
Fieldwork, Kenyan-style
In Kenya, the students journeyed across the Great Rift Valley – from Nairobi to Naivasha, Baringo, Bogoria, Nakuru, and Laikipia. Each day was filled with site visits, wildlife encounters, and project work, which included conducting and filming community interviews as well as engaging in group discussions.
A highlight of the trip was a sediment coring exercise in a Rift Valley lake. Students assisted the researchers as they set out on a small boat, carefully deploying specialised equipment to extract a long column of lakebed sediment. The process, equal parts science demonstration and action scene, offered a rare glimpse into how past climates and environmental change can be reconstructed from the layers beneath the water. Between the rigours of scientific work, there were moments of wonder too: students witnessed flocks of pink flamingoes gathering nearby, their bright plumage adding an unforgettable splash of colour to the fieldwork setting.
Year 4 Geography student Pang Kah Wing reflected, “We learnt that sediments form a kind of history book of the lake’s climate and environment. But seeing the process in real time made me admire how much coordination and effort it takes. Research is exciting, but also tough and a little dangerous!”
The Rift Valley’s lakes were a key focal point throughout the expedition. From Lake Naivasha to Lakes Baringo, Bogoria, and Nakuru, students were introduced to the diversity of lake systems – freshwater and saline, deep and shallow, and the communities that depend on them for their livelihoods. These lakes are not just ecological treasures; they are a source of life. Populations cluster around them for fishing, farming and water supply. Yet, in recent decades, rising water levels linked to climate change have reshaped both landscape and livelihoods, flooding villages, displacing families, and triggering profound hydrological changes. Students saw first-hand how these environmental shifts complicate already fragile relationships between people, land and wildlife.
Beyond technical skills and knowledge, the students also got a glimpse of Kenyan life. Having the opportunity to interview locals about education, Year 4 BES undergraduate Shaylie Yu was awed by the lengths families would go to secure schooling for their children – even selling cattle to pay school fees. “It was inspiring to learn how deeply education is valued,” Shaylie shared.
Lessons beyond the classroom that reshape perspectives
In addition to the enriching experiences gleaned from the fieldwork and field interviews, the students were often treated to wildlife encounters – lions, hyenas, giraffes and zebras were spotted, often within breathtakingly close range.
For some, the trip’s biggest impact was not visual but mental. Year 4 Geography student Chloe Lee admitted that before the trip, she assumed Singapore’s quality of life was unquestionably better than Kenya’s. But one conversation completely shifted her perspective. A local had asked with genuine concern, “You don’t have land in Singapore? Where do you build your house? Grow crops? Raise livestock? We are so lucky to live so well in Kenya.”
That interaction challenged Chloe’s worldview and made her reflect on how differently Singaporeans perceive value and well-being.
These moments, often unplanned, make field courses like GE4220 a transformative experience. Prof Taylor reflected, “Eastern Africa is the cradle of human evolution. I think we all carry a bit of Africa within ourselves. In that sense, going to Africa feels like going home.”
From fieldwork to fresh ways of thinking and understanding
Back in Singapore, each project group presented their findings through videos and presentations, raising issues that ranged from the impacts of land-use change on pastoralists to the role of religion in supporting conservation. The diversity of perspectives was as enriching as the fieldwork itself.
“It was eye-opening to see how everyone approached the same landscapes from different angles,” Shaylie reflected.
From wildlife encounters to ecosystem studies, from campfire conversations to candid insights shared by local communities, the Kenya field trip offered a rare blend of academic rigour, cross-cultural exchange, and once-in-a-lifetime experiences.
For the students, what began as a dream of “seeing Africa” transformed into a deeper understanding of how people and environment are inseparably linked – and how lessons from afar can inspire new ways of thinking back home.
By the Department of Geography at the NUS Faculty of Arts and Social Sciences
Elaine Buckberg in her EV.
Stephanie Mitchell/Harvard Staff Photographer
Work & Economy
Want Americans to love EVs? Fix this.
U.S. consumers aren’t thinking about daily needs when buying a car, says former GM chief economist
Anna Lamb
Harvard Staff Writer
October 14, 2025
5 min read
Compared to gasoline-powered cars, electric vehicles can save money at the pump, produce less harmful emissions, and avoid high maintenance costs. Why then, are so few Americans switching to EVs? The main reason, according to Elaine Buckberg, senior fellow at Harvard’s Salata Institute for Climate and Sustainability, is the hassle of charging on the go.
In a recent talk at Harvard Kennedy School, Buckberg, a former chief economist for General Motors, explored the realities of commuting with an electric vehicle, and why the lack of charging infrastructure is preventing the shift in consumer behavior needed to reduce transportation emissions.
Transportation accounts for 28 percent of U.S. greenhouse gas emissions, and light-duty vehicles like passenger cars account for a total of 17 percent.
“We need to fix charging to sell EVs, to take internal combustion engine cars off the road over time and avoid emissions,” Buckberg said.
Realities for drivers
According to Buckberg, most electric vehicles on the market have a range of 230 miles or more on a single charge. For most Americans, that’s an average week of driving.
But consumers aren’t thinking about their daily, or even weekly, habits when considering making the switch to an electric vehicle, said Buckberg. What car buyers want is a vehicle that can reliably take them on a cross-country trip should the need arise.
“There’s no one-stop app to find chargers, which means it can be hard to find out whether a charger is working. Who’s responsible? Who do you tell when it’s broken? So they don’t get fixed.”
Elaine Buckberg
“What if some weekend, they needed to take both kids and drop them off in different places, one at a soccer match and one had, I don’t know, dance competition, and they were both long-distance,” she said. “That might be a problem.”
At present, Buckberg said, most EV owners are also homeowners with access to at-home charging. But public infrastructure — which would be crucial to expand EV ownership — is still spotty.
“So people who can’t charge at home, or maybe can’t also charge in their workplace, they’re really going to rely on public charging,” she said. “That’s generally city dwellers and renters.”
Publicly available chargers — including both Level 2, which take roughly three to five hours to reach a full charge, and fast chargers that can reach a nearly full charge in around 20 minutes — currently sit at around 61,000 across the country, according to Buckberg. That’s up from 46,000 last year.
“But not nearly keeping up with the stock of EVs,” she added.
For comparison, there are more than twice as many gas-fueling stations in the U.S. — many with multiple pumps.
Buckberg added that there is little reliability in a charger working, and a lack of real-time data for drivers. According to her research, only 34 percent of charging stations share real-time data that apps like Chargepoint and PlugShare use to inform their users on where to plug in. On some of the highways her team has studied, the gaps in available data exceed 1,300 miles.
“There’s no one-stop app to find chargers, which means it can be hard to find out whether a charger is working,” she said. “Who’s responsible? Who do you tell when it’s broken? So they don’t get fixed.”
Potential policy solutions
Currently, the federal government is not prioritizing EV adoption as it has in recent years, she said. It has cut incentive programs, including tax credits, as well as spending programs meant to improve the availability of real-time charger data for drivers.
According to Buckberg, this investment is crucial to increasing the share of EVs on the road and reaping the environmental benefits. Her research shows that universal real-time data on highway fast chargers would raise new EV sales by 6.4 percentage points in 2030 — or 3.5 million additional EVs on the road in 2030 compared to the status quo. That would grow the overall share of EVs on the road by 15 percent and in turn create a reduction in emissions of about 15 million metric tons, she said.
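As a back-of-envelope check on how figures like these fit together, the snippet below multiplies the 3.5 million additional EVs by an assumed average of about 4.6 metric tons of CO₂ emitted per year by a typical gasoline car, which lands in the same ballpark as the roughly 15 million metric tons cited. The per-vehicle figure is an assumption for illustration, not a number from Buckberg's analysis.

```python
# Rough sanity check on the quoted figures, not the study's own model.
# The per-vehicle emissions value is an assumption (a typical gasoline car
# emits on the order of 4.6 metric tons of CO2 per year).

additional_evs_2030 = 3.5e6        # from the article: extra EVs on the road in 2030
gasoline_car_tco2_per_year = 4.6   # assumed annual emissions displaced per vehicle

avoided_mt = additional_evs_2030 * gasoline_car_tco2_per_year / 1e6
print(f"~{avoided_mt:.0f} million metric tons of CO2 avoided per year")
# Prints roughly 16 Mt, the same order as the ~15 Mt reduction cited above.
```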
“Data transparency could cure range anxiety,” she said in her talk. “Imagine that you could go to Google Maps, you could go to Apple Maps, you could go to whatever is the killer new EV app, and you could get reliable information.”
But there is still hope, she said. In partnership with scholars Ari Peskoe, Carrie Jenks, and Eliza Martin at Harvard Law School, Buckberg’s team has developed example legislation for state governments that would establish common standards for EV-charging providers to increase the reliability of their data.
“No expensive federal investment or tax credits,” she said. “It’s a really cheap option at a time where we’ve got pullback of federal incentives that would raise the number of registered EVs.”
And, Buckberg said, having states take the lead would reduce resistance for charging providers that have not made sharing data a priority.
“From a state perspective, this could be justified because it yields benefits at very low cost,” she said.
Earthquakes often bring to mind images of destruction, of the Earth breaking open and altering landscapes. But after an earthquake, the area around it undergoes a period of post-seismic deformation, where areas that didn’t break experience new stress as a result of the sudden change in the surroundings. Once it has adjusted to this new stress, it reaches a state of recovery.
Geologists have often thought that this recovery period was a smooth, continuous process. But MIT research published recently in Science has found evidence that while healing occurs quickly at shallow depths — roughly above 10 km — deeper depths recover more slowly, if at all.
“If you were to look before and after in the shallow crust, you wouldn’t see any permanent change. But there’s this very permanent change that persists in the mid-crust,” says Jared Bryan, a graduate student in the MIT Department of Earth, Atmospheric and Planetary Sciences (EAPS) and lead author on the paper.
The paper’s other authors include EAPS Professor William Frank and Pascal Audet from the University of Ottawa.
Everything but the quakes
In order to assemble a full understanding of how the crust behaves before, during, and after an earthquake sequence, the researchers looked at seismic data from the 2019 Ridgecrest earthquakes in California. This immature fault zone experienced the largest earthquake in the state in 20 years, and tens of thousands of aftershocks over the following year. They then removed seismic data created by the sequence and only looked at waves generated by other seismic activity around the world to see how their paths through the Earth changed before and after the sequence.
“One person’s signal is another person’s noise,” says Bryan. The team also used general ambient noise from sources like ocean waves and traffic that seismometers pick up. Then, using a technique called a receiver function, they tracked how fast the waves traveled and how that speed changed with conditions in the Earth, such as rock density and porosity, much in the same way sonar reveals how acoustic waves change when they interact with objects. With all this information, they were able to construct basic maps of the Earth around the Ridgecrest fault zone before and after the sequence.
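The study itself relies on receiver functions, but the general idea of reading small velocity changes out of repeated waveforms can be illustrated with a simpler, widely used trick: "stretching" a post-earthquake waveform until it best matches a pre-earthquake reference. The sketch below is a generic illustration on synthetic data, not the authors' actual processing chain.

```python
import numpy as np

def velocity_change_by_stretching(ref, cur, dt, dv_grid=np.linspace(-0.02, 0.02, 401)):
    """Estimate a relative velocity change dv/v by stretching the current waveform
    until it best matches the reference; dv < 0 means the medium has slowed down."""
    t = np.arange(len(ref)) * dt
    best_dv, best_cc = 0.0, -np.inf
    for dv in dv_grid:
        # A homogeneous change dv/v rescales arrival times by 1 / (1 + dv/v).
        stretched = np.interp(t, t * (1.0 + dv), cur)
        cc = np.corrcoef(ref, stretched)[0, 1]
        if cc > best_cc:
            best_dv, best_cc = dv, cc
    return best_dv, best_cc

# Synthetic demo: the same decaying wavelet recorded before and after a 1 percent slowdown.
dt = 0.01
t = np.arange(0, 20, dt)
ref = np.sin(2 * np.pi * t) * np.exp(-0.1 * t)
cur = np.sin(2 * np.pi * 0.99 * t) * np.exp(-0.1 * 0.99 * t)   # ref evaluated at 0.99 * t
dv, cc = velocity_change_by_stretching(ref, cur, dt)
print(f"estimated dv/v = {dv:+.3%} (correlation {cc:.3f})")
```

Real applications compare long-term stacks of noise correlations rather than single wavelets, but the principle of matching before-and-after waveforms is the same.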
What they found was that the shallow crust, extending about 10 km into the Earth, recovered over the course of a few months. In contrast, the deeper mid-crust showed no immediate damage, but instead changed over the same timescale on which the shallow crust recovered.
“What was surprising is that the healing in the shallow crust was so quick, and then you have this complementary accumulation occurring, not at the time of the earthquake, but instead over the post-seismic phase,” says Bryan.
Balancing the energy budget
Understanding how recovery plays out at different depths is crucial for determining how energy is spent during different parts of the seismic process, which includes activities such as the release of energy as waves, the creation of new fractures, or energy being stored elastically in the surrounding areas. Altogether, this is collectively known as the energy budget, and it is a useful tool for understanding how damage accumulates and recovers over time.
What remains unclear is the timescale on which the deeper crust recovers, if it recovers at all. The paper presents two possible scenarios: one in which the deep crust recovers over a much longer timescale than the study observed, and one in which it never recovers at all.
“Either of those are not what we expected,” says Frank. “And both of them are interesting.”
Further research will require more observations to build out a more detailed picture to see at what depth the change becomes more pronounced. In addition, Bryan wants to look at other areas, such as more mature faults that experience higher levels of seismic activity, to see if it changes the results.
“We’ll let you know in 1,000 years whether it’s recovered,” says Bryan.
In 2019, a magnitude 7.1 earthquake kicked off a period of seismic activity in the region near Ridgecrest, California. New research has found that, while the Earth’s crust in the region recovered over a period of months, the deeper crust is still experiencing damage as it adjusts to the new distribution of energy in the region.
Born in Palermo, Sicily, Giorgio Rizzo spent his childhood curious about the natural world. “I have always been fascinated by nature and how plants and animals can adapt and survive in extreme environments,” he says. “Their highly tuned biochemistry, and their incredible ability to create ones of the most complex and beautiful structures in chemistry that we still can’t even achieve in our laboratories.”
As an undergraduate student, he watched as a researcher mounted a towering chromatography column layered with colorful plant chemicals in a laboratory. When the researcher switched on a UV light, the colors turned into fluorescent shades of blue, green, red and pink. “I realized in that exact moment that I wanted to be the same person, separating new unknown compounds from a rare plant with potential pharmaceutical properties,” he recalls.
These experiences set him on a path from a master’s degree in organic chemistry to his current work as a postdoc in the MIT Department of Civil and Environmental Engineering, where he focuses on developing sustainable fertilizers and studying how rare earth elements can boost plant resilience, with the aim of reducing agriculture’s environmental impact.
In the lab of MIT Professor Benedetto Marelli, Rizzo studies plant responses to environmental stressors, such as heat, drought, and prolonged UV irradiation. This includes developing new fertilizers that can be applied as seed coating to help plants grow stronger and enhance their resistance.
“We are working on new formulations of fertilizers that aim to reduce the huge environmental impact of classical practices in agriculture based on NPK inorganic fertilizers,” Rizzo explains. Although NPK (nitrogen, phosphorus, and potassium) fertilizers are fundamental to crop yields, their tendency to accumulate in soil is detrimental to soil health and the microbiome living in it. In addition, producing them is one of the most energy-consuming and polluting chemical processes in the world.
“It is mandatory to reshape our conception of fertilizers and try to rely, at least in part, on alternative products that are safer, cheaper, and more sustainable,” he says.
Recently, Rizzo was awarded a Kavanaugh Fellowship, a program that gives MIT graduate students and postdocs entrepreneurial training and resources to bring their research from the lab to the market. “This prestigious fellowship will help me build a concrete product for a company, adding more value to our research,” he says.
Rizzo hopes their work will help farmers increase their crop yields without compromising soil quality or plant health. A major barrier to adopting new fertilizers is cost, as many farmers rely heavily on each growing season’s output and cannot risk investing in products that may underperform compared to traditional NPK fertilizers. The fertilizers being developed in the Marelli Lab address this challenge by using chitin and chitosan, abundant natural materials that make them far less expensive to produce, which Rizzo hopes will encourage farmers to try them.
“Through the Kavanaugh Fellowship, I will spend this year trying to bring the technology outside the lab to impact the world and meet the need for farmers to support their prosperity,” he says.
Mentorship has been a defining part of his postdoc experience. Rizzo describes Professor Benedetto Marelli as “an incredible mentor” who values his research interests and supports him through every stage of his work. The lab spans a wide range of projects — from plant growth enhancement and precision chemical delivery to wastewater treatment, vaccine development for fish, and advanced biochemical processes. “My colleagues created a stimulant environment with different research topics,” he notes. He is also grateful for the work he does with international institutions, which has helped him build a network of researchers and academics around the world.
Rizzo enjoys the opportunity to mentor students in the lab and appreciates their curiosity and willingness to learn. “It is one of the greatest qualities you can have as a scientist because you must be driven by curiosity to discover the unexpected,” he says.
He describes MIT as a “dynamic and stimulating experience,” but also acknowledges how overwhelming it can be. “You will feel like a small fish in a big ocean,” he says. “But that is exactly what MIT is: an ocean full of opportunities and challenges that are waiting to be solved.”
Beyond his professional work, Rizzo enjoys nature and the arts. An avid reader, he balances his scientific work with literature and history. “I never read about science-related topics — I read about it a lot already for my job,” he says. “I like classic literature, novels, essays, history of nations, and biographies. Often you can find me wandering in museums’ art collections.” Classical art, the Renaissance, and the Pre-Raphaelites are his favorite artistic movements.
Looking ahead, Rizzo hopes to shift his professional pathway toward startups or companies focused on agrotechnical improvement. His immediate goal is to contribute to initiatives where research has a direct, tangible impact on everyday life.
“I want to pursue the option of being part of a spinout process that would enable my research to have a direct impact in everyday life and help solve agricultural issues,” he adds.
In the Marelli Lab at MIT, Giorgio Rizzo develops sustainable seed coatings using natural materials to boost plant resilience and reduce the environmental impact of traditional fertilizers.
Oct. 16 is World Food Day, a global campaign to celebrate the founding of the Food and Agriculture Organization 80 years ago, and to work toward a healthy, sustainable, food-secure future. More than 670 million people in the world are facing hunger. Millions of others are facing rising obesity rates and struggle to get healthy food for proper nutrition.
World Food Day calls on not only world governments, but business, academia, the media, and even the youth to take action to promote resilient food systems and combat hunger. This year, the Abdul Latif Jameel Water and Food Systems Laboratory (J-WAFS) is spotlighting an MIT researcher who is working toward this goal by studying food and water systems in the Global South.
J-WAFS seed grants provide funding to early-stage research projects that are distinct from prior work. In an 11th round of seed grant funding in 2025, 10 MIT faculty members received support to carry out their cutting-edge water and food research. Ali Aouad PhD ’17, assistant professor of operations management at the MIT Sloan School of Management, was one of those grantees. “I had searched before joining MIT what kind of research centers and initiatives were available that tried to coalesce research on food systems,” Aouad says. “And so, I was very excited about J-WAFS.”
Aouad gathered more information about J-WAFS at the new faculty orientation session in August 2024, where he spoke to J-WAFS staff and learned about the program’s grant opportunities for water and food research. Later that fall semester, he attended a few J-WAFS seminars on agricultural economics and water resource management. That’s when Aouad knew that his project was perfectly aligned with the J-WAFS mission of securing humankind’s water and food.
Aouad’s seed project focuses on food subsidies. With a background in operations research and an interest in digital platforms, much of his work has centered on aligning supply-side operations with heterogeneous customer preferences. Past projects include ones on retail and matching systems. “I started thinking that these types of demand-driven approaches may be also very relevant to important social challenges, particularly as they relate to food security,” Aouad says. Before starting his PhD at MIT, Aouad worked on projects that looked at subsidies for smallholder farmers in low- and middle-income countries. “I think in the back of my mind, I've always been fascinated by trying to solve these issues,” he noted.
His seed grant project, Optimal subsidy design: Application to food assistance programs, aims to leverage data on preferences and purchasing habits from local grocery stores in India to inform food assistance policy and optimize the design of subsidies. Typical data collection systems, like point-of-sale terminals, are not as readily available in India’s local groceries, making this type of data on low-income shoppers hard to come by. “Mom-and-pop stores are extremely important last-mile operators when it comes to nutrition,” he explains.
For this project, the research team gave local grocers point-of-sale scanners to track purchasing habits. “We aim to develop an algorithm that converts these transactions into some sort of ‘revelation’ of the individuals’ latent preferences,” says Aouad. “As such, we can model and optimize the food assistance programs — how much variety and flexibility is offered, taking into account the expected demand uptake.” He continues, “now, of course, our ability to answer detailed design questions [across various products and prices] depends on the quality of our inference from the data, and so this is where we need more sophisticated and robust algorithms.”
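The article does not spell out the model behind that algorithm, but a standard way to turn raw transactions into latent preferences is a discrete-choice model such as the multinomial logit, fit by maximum likelihood. The sketch below is a generic, hypothetical illustration of that idea with made-up purchase counts and prices; it is not the study's actual method.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical illustration, not the study's model: infer latent preferences for
# four products from point-of-sale purchase counts observed under two price regimes,
# using a multinomial-logit choice model fit by maximum likelihood.
prices = np.array([[1.0, 1.5, 2.0, 2.5],    # regular week
                   [1.0, 1.5, 1.2, 2.5]])   # promotion week (product 3 discounted)
counts = np.array([[120,  90,  40,  30],
                   [100,  80, 110,  25]])   # purchases recorded by the scanner each week

def neg_log_likelihood(params):
    # utility = alpha_j + beta * price, with the first alpha fixed at 0 for identifiability
    alphas = np.concatenate(([0.0], params[:-1]))
    beta = params[-1]
    nll = 0.0
    for week in range(prices.shape[0]):
        utilities = alphas + beta * prices[week]
        log_probs = utilities - np.log(np.exp(utilities).sum())
        nll -= (counts[week] * log_probs).sum()
    return nll

result = minimize(neg_log_likelihood, x0=np.zeros(prices.shape[1]), method="BFGS")
print(f"estimated price sensitivity beta = {result.x[-1]:.2f}")  # negative: higher price, lower uptake
```

Estimated preferences of this kind could then feed the sort of subsidy-design optimization Aouad describes, weighing how much variety and flexibility a program should offer.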
Following the data collection and model development, the ultimate goal of this research is to inform policy surrounding food assistance programs through an “optimization approach.” Aouad describes the complexities of using optimization to guide policy. “Policies are often informed by domain expertise, legacy systems, or political deliberation. A lot of researchers build rigorous evidence to inform food policy, but it’s fair to say that the kind of approach that I’m proposing in this research is not something that is commonly used. I see an opportunity for bringing a new approach and methodological tradition to a problem that has been central for policy for many decades.”
The overall health of consumers is the reason food assistance programs exist, yet measuring long-term nutritional impacts and shifts in purchase behavior is difficult. In past research, Aouad notes that the short-term effects of food assistance interventions can be significant. However, these effects are often short-lived. “This is a fascinating question that I don’t think we will be able to address within the space of interventions that we will be considering. However, I think it is something I would like to capture in the research, and maybe develop hypotheses for future work around how we can shift nutrition-related behaviors in the long run.”
While his project develops a new methodology to calibrate food assistance programs, large-scale applications are not guaranteed. “A lot of what drives subsidy mechanisms and food assistance programs is also, quite frankly, how easy it is and how cost-effective it is to implement these policies in the first place,” comments Aouad. Cost and infrastructure barriers are unavoidable in this kind of policy research, as is the challenge of sustaining these programs. Aouad’s effort will provide insights into customer preferences and subsidy optimization in a pilot setup, but replicating the approach at real scale may be costly. Aouad hopes to gather proxy information from customers that would both feed into the model and point toward a more cost-effective way to collect data for large-scale implementation.
There is still much work to be done to ensure food security for all, whether it’s advances in agriculture, food-assistance programs, or ways to boost adequate nutrition. As the 2026 seed grant deadline approaches, J-WAFS will continue its mission of supporting MIT faculty as they pursue innovative projects that have practical and real impacts on water and food system challenges.
Ali Aouad is an assistant professor of operations management at the MIT Sloan School of Management.
‘Dry January’ helped drive drinking rates to 96-year low
Alvin Powell
Harvard Staff Writer
October 14, 2025
Health experts say rise of sober-curious movement, shifts in tech, meds likely made difference in wake of pandemic excess
Are shifts in drinking culture, wearable technology, and drug research combining to help cut alcohol consumption to levels not seen since the 1930s?
Harvard health experts think they might be.
A recent survey showed that a record-low 54 percent of U.S. adults say they drink, the lowest share since the Gallup Poll’s initial 1939 look at the nation’s imbibing habits. The reading is the latest in a multiyear trend that has seen the figure decline from 67 percent in 2022 to 62 percent in 2023, 58 percent in 2024, and 54 percent today.
The poll also suggests that those who do drink are drinking less, with a record low of 24 percent reporting having had an alcoholic beverage in the last 24 hours.
“The messages are landing,” said Marisa Silveri, an associate professor of psychiatry at Harvard Medical School and director of McLean Hospital’s Neurodevelopmental Laboratory on Addictions and Mental Health. “We’re seeing measurable shifts in drinking behavior and awareness that reflect decades of neuroscience and public health research finally translating into real-world impact.”
During the pandemic, the outlook was less positive. A significant spike in drinking caused alarm among public health officials and researchers, who saw a reversal of hard-won gains and predicted long-term harms and a spike in alcohol-related deaths — due to cancer, liver failure, and other alcohol-related ills — in the years ahead.
Silveri gives the so-called sober-curious movement a lot of credit for ushering in change. The message of re-evaluating one’s relationship with alcohol gained steam after the pandemic, spread widely through social media, and seemed to touch a collective nerve. Breaks from drinking in “Dry January” — New Year’s resolution season — or autumn’s “Sober October” went mainstream.
Silveri, like many Americans, became more aware of how drinking habits changed during the pandemic. Extending a Dry January into 500 days, she saw firsthand the sleep and mood improvements that mirror what neuroscience has long demonstrated about alcohol’s impact on brain health.
The increasing sophistication and popularity of wearable technology also may have played a part, Silveri said. Fitness trackers not only record a person’s steps during the day but also highlight the physiological effects of a night’s drinking, chronicling elevated heart rate, blood pressure, and sleep disruptions — changes that can result in mood, cognitive, and mental health impacts.
Recent scientific findings that no amount of alcohol is healthy arrived amid the shifting social environment on drinking, potentially lending those cautions greater resonance, Silveri said.
Those results also called into question the once-accepted wisdom that light to moderate drinking isn’t harmful and that some drinking — a glass of red wine per day — may actually be healthful.
“Alcohol challenges our physiology, there’s no disputing that,” Silveri said. “And there’s not really any evidence that it changes physiology in a beneficial way.”
Marisa Silveri (left), Joji Suzuki, Gyongyi Szabo, and Jagpreet Chhatwal.
That stance echoes the January warning by the U.S. surgeon general that alcohol is the nation’s third-leading preventable cause of cancer, after smoking and obesity.
Drinking, the advisory said, has a causal link to seven different cancers: breast, colorectal, esophagus, liver, mouth, throat, and voice box. Drinking more increases risk, the advisory says, though for three cancers — breast, mouth, and throat — the risk rises with as little as one drink per day.
“The message that’s finally resonating is that if you have an alcohol use disorder or have problematic alcohol use, any reduction in use is an improvement. It’s not all or nothing,” Silveri said. “For some, abstinence remains essential, but for many people who drink at hazardous levels, even modest reductions can yield measurable benefits — better sleep, improved mood, sharper cognition. These small physiological and psychological gains reinforce each other, supporting longer-term change.”
While the negative health effects of alcohol have gained public prominence recently, some researchers caution against discarding all past evidence of beneficial effects.
There’s no argument that heavy drinking is harmful, but the question — and the evidence — concerning low and moderate drinking is more complex and requires more high-quality studies, they say. Increased risk for some cancers may be counterbalanced by lower risk of others, for example.
Moderate drinkers also appear to have a lower risk of cardiovascular disease than nondrinkers. Drinking patterns matter as well: one large study showed that moderate drinking with meals appeared to decrease mortality, while consuming the same weekly amount outside of meals and on fewer occasions increased it.
Joji Suzuki, associate professor of psychiatry at Harvard Medical School and director of Brigham and Women’s Hospital’s Division of Addiction Psychiatry, said the sober-curious movement has been reinforced by the development of appealing non-alcoholic alternatives, such as mocktails and a new generation of nonalcoholic beer and wine, that reduce stigma and make bars, parties, and nightclubs more welcoming.
While the increased interest in sobriety is important, Suzuki said many people will continue to drink, so it’s also important not to lose the message of moderation.
“If you’re not drinking, don’t start drinking because you think you’re going to get a benefit,” Suzuki said. “But we know that people are going to drink, so keep it moderate. Moderation is a very important message.”
Another shift in recent years, Suzuki said, has been the startling effectiveness of the latest generation of anti-obesity medications in helping curb drinking.
These Ozempic successors are GLP-1 receptor agonists and have been widely prescribed in just a few years. They’ve proven remarkably effective at removing the desire to overeat and have allowed people to drop significant weight.
But they also reduce the thirst for alcohol, Suzuki said, to the extent that trials are being considered to test GLP-1 medication for treatment of alcohol use disorder.
If successful, the drugs would represent a powerful new option beyond the current major treatment alternatives, like naltrexone, which have not proven very effective.
“People who are not even trying to cut back on drinking are saying, ‘After starting those medications, I notice that I’m not drinking much — or at all,’” Suzuki said. “There are a number of companies that are now specifically targeting alcohol use disorder with GLP-1s, with the aim of making them an FDA-approved treatment.”
Gyongyi Szabo, expert on liver inflammation, HMS professor of medicine, and chief academic officer at Beth Israel Deaconess Medical Center, sees a generational shift in the data, and in her own home.
The recent Gallup poll found that young adult drinking rates have been falling for a decade, dropping from 59 percent in 2023 to 50 percent today.
Szabo’s 30-year-old son recently said he’d become a vegetarian, something she attributes to a greater appreciation among the current generation of young adults of the health and social consequences of choices made in eating and drinking.
“There is a generational shift that may be beneficial,” Szabo said.
Szabo said the excess consumption during the pandemic’s difficult months may have caused “an awakening” among light or moderate drinkers who became worried about what was happening. The downstream effects of that awakening — and the reaction it caused — may still be becoming apparent.
“The increased alcohol use during the lockdown was really bad for everyone,” Szabo said. “But maybe the news actually travels, and people change their behavior. If this is true, it is very, very good. It’s going to have a multiplier effect from the standpoint of liver disease, because, in combination with alcohol, other conditions like medical obesity and Type 2 diabetes have a synergistic effect on the liver.”
During the pandemic, Silveri said, there were concerns that parents stuck at home might allow or even encourage teenagers to drink with them, reasoning that it was safer for them to do so in a controlled environment.
But the pandemic may have had a beneficial effect as well.
School shutdowns disrupted the high school social scene that commonly involves illicit drinking, and research has shown that the later youth begin drinking, the lower the chances that they’ll become problem drinkers down the road.
Even a year or two delay in the age of initiation of youth drinking can reduce physiological impacts on the adolescent brain, with potentially positive ripple effects.
“For anyone under the age of 21 there are not only no benefits to drinking, but clear harms,” Silveri said. “Alcohol interferes with brain development, amplifies risk-taking and emotional reactivity, and raises vulnerability to addiction. Delaying the onset of first use remains one of the most effective ways to protect against later addiction. I stand by that strongly because it is supported by some of our best epidemiological data.”
Despite the positive trends, experts said alcohol’s negative societal impacts are far from finished.
Jagpreet Chhatwal, associate professor of radiology at Harvard Medical School and director of Massachusetts General Hospital’s Institute for Technology Assessment, said the recent polling provided a broad view but didn’t focus on problem drinking, which doesn’t appear to have subsided.
Liver failure, Chhatwal said, is caused by years of heavy drinking, and any change in those drinking habits won’t be reflected in clinic waiting rooms for some time.
“Binge drinking, high-risk drinking, those things were not covered in the survey, and it will be important that we see those trends also coming down,” said Chhatwal, who leads health policy research in several areas, including alcohol use disorder, liver cancer, and nonalcoholic fatty liver disease. “It’s too soon to say that this small drop in consumption will have an impact because it takes years to see that trend in liver disease.”
There are other pockets of concern as well, Suzuki said.
People with alcohol-related issues are over-represented in the nation’s emergency rooms, while high school drinking remains an issue. Women continue to narrow the gap with men when it comes to problem drinking, with the result that alcoholic liver disease is increasing among women.
“It’s not all good news,” Suzuki said. “There are some troubling trends that are emerging, but overall I think it’s great that there’s a greater interest in sobriety and recognition that heavy drinking can be harmful.”
Manufacturing better batteries, faster electronics, and more effective pharmaceuticals depends on the discovery of new materials and the verification of their quality. Artificial intelligence is helping with the former, with tools that comb through catalogs of materials to quickly tag promising candidates.
But once a material is made, verifying its quality still involves scanning it with specialized instruments to validate its performance — an expensive and time-consuming step that can hold up the development and distribution of new technologies.
Now, a new AI tool developed by MIT engineers could help clear the quality-control bottleneck, offering a faster and cheaper option for certain materials-driven industries.
In a study appearing today in the journal Matter, the researchers present “SpectroGen,” a generative AI tool that turbocharges scanning capabilities by serving as a virtual spectrometer. The tool takes in “spectra,” or measurements of a material in one scanning modality, such as infrared, and generates what that material’s spectra would look like if it were scanned in an entirely different modality, such as X-ray. The AI-generated spectral results match, with 99 percent accuracy, the results obtained from physically scanning the material with the new instrument.
Certain spectroscopic modalities reveal specific properties in a material: Infrared reveals a material’s molecular groups, while X-ray diffraction visualizes the material’s crystal structures, and Raman scattering illuminates a material’s molecular vibrations. Each of these properties is essential in gauging a material’s quality and typically requires tedious workflows on multiple expensive and distinct instruments to measure.
With SpectroGen, the researchers envision that a diversity of measurements can be made using a single and cheaper physical scope. For instance, a manufacturing line could carry out quality control of materials by scanning them with a single infrared camera. Those infrared spectra could then be fed into SpectroGen to automatically generate the material’s X-ray spectra, without the factory having to house and operate a separate, often more expensive X-ray-scanning laboratory.
The new AI tool generates spectra in less than one minute, a thousand times faster than traditional approaches, which can take several hours to days to measure and validate.
“We think that you don’t have to do the physical measurements in all the modalities you need, but perhaps just in a single, simple, and cheap modality,” says study lead Loza Tadesse, assistant professor of mechanical engineering at MIT. “Then you can use SpectroGen to generate the rest. And this could improve productivity, efficiency, and quality of manufacturing.”
The study was led by Tadesse, with former MIT postdoc Yanmin Zhu serving as first author.
Beyond bonds
Tadesse’s interdisciplinary group at MIT pioneers technologies that advance human and planetary health, developing innovations for applications ranging from rapid disease diagnostics to sustainable agriculture.
“Diagnosing diseases, and material analysis in general, usually involves scanning samples and collecting spectra in different modalities, with different instruments that are bulky and expensive and that you might not all find in one lab,” Tadesse says. “So, we were brainstorming about how to miniaturize all this equipment and how to streamline the experimental pipeline.”
Zhu noted the increasing use of generative AI tools for discovering new materials and drug candidates, and wondered whether AI could also be harnessed to generate spectral data. In other words, could AI act as a virtual spectrometer?
A spectroscope probes a material’s properties by sending light of a certain wavelength into the material. That light causes molecular bonds in the material to vibrate in ways that scatter the light back out to the scope, where the light is recorded as a pattern of waves, or spectra, that can then be read as a signature of the material’s structure.
For AI to generate spectral data, the conventional approach would involve training an algorithm to recognize connections between physical atoms and features in a material, and the spectra they produce. Given the complexity of molecular structures within just one material, Tadesse says such an approach can quickly become intractable.
“Doing this even for just one material is impossible,” she says. “So, we thought, is there another way to interpret spectra?”
The team found an answer with math. They realized that a spectral pattern, which is a sequence of waveforms, can be represented mathematically. For instance, a spectrum made up of a series of bell curves follows a “Gaussian” distribution, associated with one mathematical expression, while a series of narrower peaks follows a “Lorentzian” distribution, described by a separate, distinct expression. As it turns out, for most materials infrared spectra characteristically contain more Lorentzian waveforms, Raman spectra are more Gaussian, and X-ray spectra are a mix of the two.
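The two lineshapes the researchers refer to are standard functions. The short sketch below simply writes them down and evaluates them away from the peak center to show the key difference, the Lorentzian's much heavier tails; the width and amplitude values are illustrative.

```python
import numpy as np

def gaussian(x, center, fwhm, amplitude=1.0):
    """Gaussian lineshape with the given full width at half maximum (FWHM)."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return amplitude * np.exp(-0.5 * ((x - center) / sigma) ** 2)

def lorentzian(x, center, fwhm, amplitude=1.0):
    """Lorentzian lineshape; same FWHM, but far heavier tails than the Gaussian."""
    gamma = fwhm / 2.0
    return amplitude * gamma**2 / ((x - center) ** 2 + gamma**2)

# Evaluate both at the peak, at the half-width point, and far out in the tail (FWHM = 2).
x = np.array([0.0, 1.0, 5.0])
print("gaussian  :", np.round(gaussian(x, 0.0, 2.0), 5))    # [1.0, 0.5, ~0.0]
print("lorentzian:", np.round(lorentzian(x, 0.0, 2.0), 5))  # [1.0, 0.5, ~0.038]
```

A measured spectrum is then modeled as a sum of many such peaks, and in this picture the balance of Gaussian versus Lorentzian shapes is part of what distinguishes one modality from another.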
Tadesse and Zhu worked this mathematical interpretation of spectral data into an algorithm that they then incorporated into a generative AI model.
“It’s a physics-savvy generative AI that understands what spectra are,” Tadesse says. “And the key novelty is, we interpreted spectra not as how it comes about from chemicals and bonds, but that it is actually math — curves and graphs, which an AI tool can understand and interpret.”
Data co-pilot
The team demonstrated their SpectroGen AI tool on a large, publicly available dataset of over 6,000 mineral samples. Each sample includes information on the mineral’s properties, such as its elemental composition and crystal structure. Many samples in the dataset also include spectral data in different modalities, such as X-ray, Raman, and infrared. Of these samples, the team fed several hundred to SpectroGen, in a process that trained the underlying neural network to learn correlations between a mineral’s different spectral modalities. This training enabled SpectroGen to take in a material’s spectrum in one modality, such as infrared, and generate what its spectrum in a totally different modality, such as X-ray, should look like.
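The article does not describe SpectroGen's architecture, so the sketch below should be read only as a minimal, generic illustration of the training setup it describes: a network that learns to map paired spectra from one modality to another. The layer sizes, spectrum lengths, and random stand-in data are all invented for illustration.

```python
import torch
from torch import nn

# Generic sketch, not SpectroGen's actual architecture: learn to map a spectrum
# measured in one modality (e.g., infrared) to a predicted spectrum in another
# (e.g., X-ray). The tensors below are random stand-ins for paired training spectra.
n_samples, n_points = 256, 500
infrared = torch.rand(n_samples, n_points)   # placeholder input spectra
xray = torch.rand(n_samples, n_points)       # placeholder paired target spectra

model = nn.Sequential(
    nn.Linear(n_points, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, n_points),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(50):
    optimizer.zero_grad()
    loss = loss_fn(model(infrared), xray)    # compare generated to measured spectra
    loss.backward()
    optimizer.step()

# Once trained, an unseen infrared spectrum can be "translated" to the X-ray modality.
with torch.no_grad():
    generated = model(torch.rand(1, n_points))
print(generated.shape)                       # torch.Size([1, 500])
```

In practice the paired training spectra would come from the mineral dataset described above, and the trained model would be evaluated on held-out minerals, as in the next paragraph.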
Once they trained the AI tool, the researchers fed SpectroGen a spectrum from a mineral in the dataset that was not included in the training process. They asked the tool to generate a spectrum in a different modality based on this “new” input. The AI-generated spectrum, they found, was a close match to the mineral’s real spectrum, which had been recorded by a physical instrument. The researchers carried out similar tests with a number of other minerals and found that the AI tool quickly generated spectra with 99 percent correlation.
“We can feed spectral data into the network and can get another totally different kind of spectral data, with very high accuracy, in less than a minute,” Zhu says.
The team says that SpectroGen can generate spectra for any type of mineral. In a manufacturing setting, for instance, mineral-based materials that are used to make semiconductors and battery technologies could first be quickly scanned by an infrared laser. The spectra from this infrared scanning could be fed into SpectroGen, which would then generate a spectra in X-ray, which operators or a multiagent AI platform can check to assess the material’s quality.
“I think of it as having an agent or co-pilot, supporting researchers, technicians, pipelines and industry,” Tadesse says. “We plan to customize this for different industries’ needs.”
The team is exploring ways to adapt the AI tool for disease diagnostics, and for agricultural monitoring through an upcoming project funded by Google. Tadesse is also advancing the technology to the field through a new startup and envisions making SpectroGen available for a wide range of sectors, from pharmaceuticals to semiconductors to defense.
More than 300 million people worldwide are living with rare disorders — many of which have a genetic cause and affect the brain and nervous system — yet the vast majority of these conditions lack an approved therapy. Because each rare disorder affects fewer than 65 out of every 100,000 people, studying these disorders and creating new treatments for them is especially challenging.
Thanks to a generous philanthropic gift from Ana Méndez ’91 and Rajeev Jayavant ’86, EE ’88, SM ’88, MIT is now poised to fill gaps in this research landscape. By establishing the Rare Brain Disorders Nexus — or RareNet — at MIT's McGovern Institute for Brain Research, the alumni aim to convene leaders in neuroscience research, clinical medicine, patient advocacy, and industry to streamline the lab-to-clinic pipeline for rare brain disorder treatments.
“Ana and Rajeev’s commitment to MIT will form crucial partnerships to propel the translation of scientific discoveries into promising therapeutics and expand the Institute’s impact on the rare brain disorders community,” says MIT President Sally Kornbluth. “We are deeply grateful for their pivotal role in advancing such critical science and bringing attention to conditions that have long been overlooked.”
Building new coalitions
Several hurdles have slowed the lab-to-clinic pipeline for rare brain disorder research. It is difficult to secure a sufficient number of patients per study, and current research efforts are fragmented, since each study typically focuses on a single disorder (there are more than 7,000 known rare disorders, according to the World Health Organization). Pharmaceutical companies are often reluctant to invest in emerging treatments due to a limited market size and the high costs associated with preparing drugs for commercialization.
Méndez and Jayavant envision that RareNet will finally break down these barriers. “Our hope is that RareNet will allow leaders in the field to come together under a shared framework and ignite scientific breakthroughs across multiple conditions. A discovery for one rare brain disorder could unlock new insights that are relevant to another,” says Jayavant. “By congregating the best minds in the field, we are confident that MIT will create the right scientific climate to produce drug candidates that may benefit a spectrum of uncommon conditions.”
Guoping Feng, the James W. (1963) and Patricia T. Poitras Professor in Neuroscience and associate director of the McGovern Institute, will serve as RareNet’s inaugural faculty director. Feng holds a strong record of advancing studies on therapies for neurodevelopmental disorders, including autism spectrum disorders, Williams syndrome, and uncommon forms of epilepsy. His team’s gene therapy for Phelan-McDermid syndrome, a rare and profound autism spectrum disorder, has been licensed to Jaguar Gene Therapy and is currently undergoing clinical trials. “RareNet pioneers a unique model for biomedical research — one that is reimagining the role academia can play in developing therapeutics,” says Feng.
RareNet plans to deploy two major initiatives: a global consortium and a therapeutic pipeline accelerator. The consortium will form an international network of researchers, clinicians, and patient groups from the outset. It seeks to connect siloed research efforts, secure more patient samples, promote data sharing, and drive a strong sense of trust and goal alignment across the RareNet community. Partnerships within the consortium will support the aim of the therapeutic pipeline accelerator: to de-risk early lab discoveries and expedite their translation to clinic. By fostering more targeted collaborations — especially between academia and industry — the accelerator will prepare potential treatments for clinical use as efficiently as possible.
MIT labs are focusing on four uncommon conditions in the first wave of RareNet projects: Rett syndrome, prion disease, disorders linked to SYNGAP1 mutations, and Sturge-Weber syndrome. The teams are working to develop novel therapies that can slow, halt, or reverse dysfunctions in the brain and nervous system.
These efforts will build new bridges to connect key stakeholders across the rare brain disorders community and disrupt conventional research approaches. “Rajeev and I are motivated to seed powerful collaborations between MIT researchers, clinicians, patients, and industry,” says Méndez. “Guoping Feng clearly understands our goal to create an environment where foundational studies can thrive and seamlessly move toward clinical impact.”
“Patient and caregiver experiences, and our foreseeable impact on their lives, will guide us and remain at the forefront of our work,” Feng adds. “For far too long has the rare brain disorders community been deprived of life-changing treatments — and, importantly, hope. RareNet gives us the opportunity to transform how we study these conditions, and to do so at a moment when it’s needed more than ever.”
Scientists at MIT and elsewhere have discovered extremely rare remnants of “proto Earth,” which formed about 4.5 billion years ago, before a colossal collision irreversibly altered the primitive planet’s composition and produced the Earth as we know today. Their findings, reported today in the journal Nature Geoscience, will help scientists piece together the primordial starting ingredients that forged the early Earth and the rest of the solar system.
Billions of years ago, the early solar system was a swirling disk of gas and dust that eventually clumped and accumulated to form the earliest meteorites, which in turn merged to form the proto Earth and its neighboring planets.
In this earliest phase, Earth was likely rocky and bubbling with lava. Then, less than 100 million years later, a Mars-sized meteorite slammed into the infant planet in a singular “giant impact” event that completely scrambled and melted the planet’s interior, effectively resetting its chemistry. Whatever original material the proto Earth was made from was thought to have been altogether transformed.
But the MIT team’s findings suggest otherwise. The researchers have identified a chemical signature in ancient rocks that is distinct from most other materials found in the Earth today. The signature is in the form of a subtle imbalance in potassium isotopes discovered in samples of very old and very deep rocks. The team determined that the potassium imbalance could not have been produced by any previous large impacts or by geological processes occurring in the Earth today.
The most likely explanation for the samples’ chemical composition is that they must be leftover material from the proto Earth that somehow remained unchanged, even as most of the early planet was impacted and transformed.
“This is maybe the first direct evidence that we’ve preserved the proto Earth materials,” says Nicole Nie, the Paul M. Cook Career Development Assistant Professor of Earth and Planetary Sciences at MIT. “We see a piece of the very ancient Earth, even before the giant impact. This is amazing because we would expect this very early signature to be slowly erased through Earth’s evolution.”
The study’s other authors include Da Wang of Chengdu University of Technology in China, Steven Shirey and Richard Carlson of the Carnegie Institution for Science in Washington, Bradley Peters of ETH Zürich in Switzerland, and James Day of Scripps Institution of Oceanography in California.
A curious anomaly
In 2023, Nie and her colleagues analyzed many of the major meteorites that have been collected from sites around the world and carefully studied. Before impacting the Earth, these meteorites likely formed at various times and locations throughout the solar system, and therefore represent the solar system’s changing conditions over time. When the researchers compared the chemical compositions of these meteorite samples to Earth, they identified among them a “potassium isotopic anomaly.”
Isotopes are slightly different versions of an element that have the same number of protons but a different number of neutrons. Potassium has three naturally occurring isotopes, with mass numbers (protons plus neutrons) of 39, 40, and 41. Wherever potassium has been found on Earth, it occurs in a characteristic combination of these isotopes, with potassium-39 and potassium-41 overwhelmingly dominant. Potassium-40 is present, but at a vanishingly small percentage in comparison.
Nie and her colleagues discovered that the meteorites they studied showed balances of potassium isotopes that were different from most materials on Earth. This potassium anomaly suggested that any material that exhibits a similar anomaly likely predates Earth’s present composition. In other words, any potassium imbalance would be a strong sign of material from the proto Earth, before the giant impact reset the planet’s chemical composition.
“In that work, we found that different meteorites have different potassium isotopic signatures, and that means potassium can be used as a tracer of Earth’s building blocks,” Nie explains.
“Built different”
In the current study, the team looked for signs of potassium anomalies not in meteorites, but within the Earth. Their samples include rocks, in powder form, from Greenland and Canada, where some of the oldest preserved rocks are found. They also analyzed lava deposits collected from Hawaii, where volcanoes have brought up some of the Earth’s earliest, deepest materials from the mantle (the planet’s thickest layer of rock that separates the crust from the core).
“If this potassium signature is preserved, we would want to look for it in deep time and deep Earth,” Nie says.
The team first dissolved the various powder samples in acid, then carefully isolated any potassium from the rest of the sample and used a special mass spectrometer to measure the ratio of each of potassium’s three isotopes. Remarkably, they identified in the samples an isotopic signature that was different from what’s been found in most materials on Earth.
Specifically, they identified a deficit in the potassium-40 isotope. In most materials on Earth, this isotope is already an insignificant fraction compared to potassium’s other two isotopes. But the researchers were able to discern that their samples contained an even smaller percentage of potassium-40. Detecting this tiny deficit is like spotting a single grain of brown sand in a bucketful, rather than a scoopful, of yellow sand.
The team found that, indeed, the samples exhibited the potassium-40 deficit, showing that the materials “were built different,” says Nie, compared to most of what we see on Earth today.
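That comparison can be made concrete with a short sketch. The following is a minimal illustration in Python, assuming a rough terrestrial potassium-40 abundance of about 0.0117 percent and a made-up sample value; it is not the team’s data or their mass-spectrometry pipeline, only a way to express a tiny deficit as a parts-per-million anomaly.

# Minimal sketch: express a small potassium-40 deficit as a deviation
# from a reference composition, in parts per million (ppm).
# The reference is a rough terrestrial abundance; the sample is hypothetical.

REFERENCE_K40_FRACTION = 0.000117  # K-40 is roughly 0.0117% of natural potassium

def k40_anomaly_ppm(sample_k40_fraction, reference=REFERENCE_K40_FRACTION):
    """Deviation of a sample's K-40 fraction from the reference, in ppm."""
    return (sample_k40_fraction / reference - 1.0) * 1e6

# A hypothetical sample whose K-40 fraction sits 100 ppm (0.01%) below the reference
sample = REFERENCE_K40_FRACTION * (1 - 100e-6)
print(f"K-40 anomaly: {k40_anomaly_ppm(sample):+.0f} ppm")  # prints about -100 ppm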
But could the samples be rare remnants of the proto Earth? To test this, the researchers assumed it was true and worked out the consequences. They reasoned that if the proto Earth were originally made from such potassium-40-deficient materials, then most of this material would have undergone chemical changes — from the giant impact and subsequent, smaller meteorite impacts — that ultimately resulted in the materials with more potassium-40 that we see today.
The team used compositional data from every known meteorite and carried out simulations of how the samples’ potassium-40 deficit would change following impacts by these meteorites and by the giant impact. They also simulated geological processes that the Earth experienced over time, such as the heating and mixing of the mantle. In the end, their simulations produced a composition with a slightly higher fraction of potassium-40 compared to the samples from Canada, Greenland, and Hawaii. More importantly, the simulated compositions matched those of most modern-day materials.
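The logic of those simulations can be illustrated with a deliberately simple mixing calculation. The sketch below assumes hypothetical end-member anomalies and equal potassium concentrations in both reservoirs, a simplification the study itself does not make; it only shows why adding impactor material with a modern-like signature pulls a potassium-40-deficient reservoir toward present-day compositions.

# Toy two-component mixing model (hypothetical numbers, not values from the paper).
# A K-40-deficient proto-Earth reservoir is mixed with impactor material whose
# composition matches the modern reference; the bulk anomaly shrinks toward zero.

def mixed_anomaly_ppm(proto_ppm, impactor_ppm, impactor_mass_fraction):
    """Mass-weighted K-40 anomaly of the mixture, in ppm, assuming both
    reservoirs carry similar potassium concentrations."""
    f = impactor_mass_fraction
    return (1.0 - f) * proto_ppm + f * impactor_ppm

PROTO = -100.0     # hypothetical deficit of proto-Earth material (ppm)
IMPACTOR = 0.0     # hypothetical impactor matching the modern reference (ppm)

for f in (0.0, 0.25, 0.5, 0.9):
    bulk = mixed_anomaly_ppm(PROTO, IMPACTOR, f)
    print(f"impactor mass fraction {f:.2f} -> bulk anomaly {bulk:+.0f} ppm")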
The work suggests that materials with a potassium-40 deficit are likely leftover original material from the proto Earth.
Curiously, the samples’ signature isn’t a precise match with any other meteorite in geologists’ collections. While the meteorites in the team’s previous work showed potassium anomalies, they aren’t exactly the deficit seen in the proto Earth samples. This means that whatever meteorites and materials originally formed the proto Earth have yet to be discovered.
“Scientists have been trying to understand Earth’s original chemical composition by combining the compositions of different groups of meteorites,” Nie says. “But our study shows that the current meteorite inventory is not complete, and there is much more to learn about where our planet came from.”
This work was supported, in part, by NASA and MIT.
“This is maybe the first direct evidence that we’ve preserved the proto Earth materials,” says Nicole Nie. An artist’s illustration shows a rocky proto Earth bubbling with lava.
Peter Chen, Professor of Physical-Organic Chemistry, will be delivering a farewell lecture to mark his upcoming retirement. Chen is a man with a remarkable history who has played a significant role in shaping ETH Zurich for over thirty years.
What does “home” mean in Singapore’s 60th year of nationhood? At the annual NUS Chua Thian Poh Community Leadership Centre (CTPCLC) Symposium this year, NUS students, alumni, faculty, guests and community partners, including representatives from the Chua Foundation, came together to explore this question, discovering that “home” is as much about neighbourliness and family as it is about digital connections and inclusive communities.
Held on 27 September 2025, the Symposium brought together more than 130 attendees, to reflect on the theme: The Idea of ‘Home’ @SG60.
Perspectives on belonging
The event opened with a panel discussion moderated by CTPCLC Director Associate Professor Chng Huang Hoon, featuring Guest-of-Honour Mr David Neo, Acting Minister for Culture, Community and Youth and Senior Minister of State for Education; Ms Chua Weiling, Director of Philanthropy at Chua Foundation and CEO of One Hill Capital Pte Ltd; and CTPCLC alumnus Mr Cheng Tian Wei.
Mr Neo emphasised the importance of balancing individual and collective responsibility in fostering a sense of belonging, noting the symbiotic relationship between the individual and the community. "The 'Me' succeeds because of the 'We', and willingly gives back to the 'We'. Policies and programmes may set the foundation but everyone must work together to build connections, strengthen trust and shape our collective future," he said.
Building on the emphasis on collective responsibility, Ms Chua highlighted that philanthropy should begin at home, noting that individuals can make their impact felt by giving back to Singapore before looking further afield. “If you want to start something, start from home,” she said. Echoing this ground-up ethos, Mr Cheng shared how everyday neighbourliness forms the bedrock of cohesion. He said, “With everyone looking out for their neighbours, and the people around them, we can build a kinder, more compassionate society.”
Together, their perspectives traced a clear vision of “home”, enabled by national structures, nurtured within families and communities, and sustained through compassionate community practice.
From kampung to cloud
Offering a contemporary twist, CTPCLC alumnus and 2023 Outstanding Young Alumni Award recipient, Mr Khoo Yi Feng, delivered a keynote address titled From Kampung to Cloud: Rethinking Home in an Age of Movement and Memory.
Mr Khoo noted that in today’s world, belonging extends beyond physical spaces. “As networked individuals, we fulfil our needs in multiple online communities. This challenges community leaders to design, cultivate and steward virtual spaces. The feeling of ‘home’ doesn’t just happen and is no longer just a physical place.”
Drawing on his experiences with mental health advocacy, he urged students, educators, and partners to co-create spaces—physical and virtual—where people feel safe, seen, supported, and empowered to “make good trouble”.
Diverse pathways to community impact
A highlight of the Symposium was a showcase of four student-led community projects, where NUS students applied classroom learning to tackle real-world social issues.
Identifying the aftercare needs of families in their post-home ownership journey
CTPCLC students Mavis Chin Jun Qi (Arts and Social Sciences ’25) and Freddy Ow Yong Zhi Long (Arts and Social Sciences ’25) presented a study in partnership with the South Central Community Family Service Centre, to examine the aftercare needs of low-income families who completed the Keystart Home Ownership Programme. Using semi-structured interviews and focus group discussions, the team developed a three-phase aftercare model that focuses on strengthening financial literacy, building home maintenance skills, and deepening social connectedness to support families’ stability beyond the point of purchase.
Negotiating memory and modernity: A stakeholder-centric redevelopment of Esplanade Park
A team from NUS Ridge View Residential College (RVRC), represented by Hazel Tio (Science ’26), investigated the urban tension between heritage preservation and modern development through the lens of the development of the Padang and Esplanade Park. Grounded in multi-layered historical analysis and the site’s recent national recognition and potential UNESCO status, the study proposed a stakeholder-driven, collaborative placemaking approach for Esplanade Park. By integrating the site’s history, context, and redevelopment needs with interactive installations and curated cultural experiences, the team outlined a path to revitalise the park, renewing civic identity while harmonising memory and modernity.
Coming home empty: Perspectives of empty nesters who find renewed purpose in art
Partnering with Mama on Palette (MoP), a community supporting mothers’ artistic expression, CTPCLC students Chomel Chan (Arts and Social Sciences ’25), Shannon Foo Shao Wei (Business ’26), Denise Leong En Hui (Arts and Social Sciences ’25), Angie Lim Tze Yii (Arts and Social Sciences ’25) and Isabel Lui Yisha (Arts and Social Sciences ’25) explored the expected and lived experiences of female empty nesters in Singapore. Through semi-structured interviews with eight mothers, the research surfaced themes including spousal relationships, phases of transition, and self-fulfilment. This project highlighted how MoP fosters community and empowerment and pointed to opportunities to deepen support for current and future empty nesters. Recommendations included buddy systems, bonding programmes, and regular collaborative art projects—underscoring the role of artistic engagement in ensuring wellbeing and resilience.
Nourishing rough sleepers in Singapore
NUS College student Keicia Seek En-Ting (Arts and Social Sciences ’27) addressed food insecurity among rough sleepers as part of her team’s Impact Experience project, working with Homeless Hearts of Singapore and food charities to co-develop a sustainable food distribution system. Guided by a needs assessment conducted directly with rough sleepers, the project aims to provide regular, nutritious meals that alleviate financial pressures and improve physical and mental well-being. A key outcome is a replicable distribution model designed to respond to community-identified needs and enable longer-term support.
Recognising achievement and community spirit
The symposium also celebrated the Class of 2025 graduates who completed the Certificate or Minor Degree in Community Development and Leadership. In her valedictory address, Isabel Lui (Arts and Social Sciences ’25) thanked professors, peers, and families for anchoring the cohort’s journey, and urged her classmates to carry that sense of belonging into every space—serving, collaborating, and building communities where others feel they belong. The event concluded with a networking lunch, where students, alumni, partners, and guests renewed connections and exchanged ideas to make the world a little more like home.
The 2025 symposium marked another key milestone in CTPCLC’s ongoing work to cultivate youth leaders who can bridge sectors, mobilise communities, and co-create solutions. With sustained support from the Chua Foundation and strong partnerships across NUS and the wider community, CTPCLC continues to empower students to play an active role in building an inclusive, resilient society, and contribute meaningfully to Singapore and beyond.
The NUS Chua Thian Poh Community Leadership Centre offers both a Certificate and a Minor Degree in Community Development and Leadership. To date, 370 students have been awarded the Certificate in Community Development and Leadership and 59 of these students also received a Minor Degree in Community Development and Leadership.
University of Cambridge scientists have used human stem cells to create three-dimensional embryo-like structures that replicate certain aspects of very early human development – including the production of blood stem cells.
Human blood stem cells, also known as hematopoietic stem cells, are immature cells that can develop into any type of blood cell, including red blood cells that carry oxygen and various types of white blood cells crucial to the immune system.
The embryo-like structures, which the scientists have named ‘hematoids’, are self-organising and start producing blood after around two weeks of development in the lab - mimicking the development process in human embryos.
The structures differ from real human embryos in many ways, and cannot develop into them because they lack several embryonic tissues, as well as the supporting yolk sac and placenta needed for further development.
Hematoids hold exciting potential for a better understanding of blood formation during early human development, simulating blood disorders like leukaemia, and for producing long-lasting blood stem cells for transplants.
The human stem cells used to derive hematoids can be created from any cell in the body. This means the approach also holds great potential for personalised medicine in the future, by allowing the production of blood that is fully compatible with a patient’s own body.
Although other methods exist for generating human blood stem cells in the laboratory, these require a cocktail of extra proteins to support the stem cells’ growth and development. The new method mimics the natural developmental process, based on a self-organising human embryo-like model, where the cells’ intrinsic support environment drives the formation of blood cells and beating heart cells within the same system.
Dr Jitesh Neupane, a researcher at the University of Cambridge’s Gurdon Institute and joint first author of the study, said: “It was an exciting moment when the blood red colour appeared in the dish – it was visible even to the naked eye.”
He added, “Our new model mimics human foetal blood development in the lab. This sheds light on how blood cells naturally form during human embryogenesis, offering potential medical advances to screen drugs, study early blood and immune development, and model blood disorders like leukaemia.”
Professor Azim Surani at the University of Cambridge’s Gurdon Institute, senior author of the paper, said: “This model offers a powerful new way to study blood development in the early human embryo. Although it is still in the early stages, the ability to produce human blood cells in the lab marks a significant step towards future regenerative therapies - which use a patient’s own cells to repair and regenerate damaged tissues.”
Dr Geraldine Jowett at the University of Cambridge’s Gurdon Institute, co-first author of the study, said: “Hematoids capture the second wave of blood development that can give rise to specialised immune cells or adaptive lymphoid cells, like T cells, opening up exciting avenues for their use in modelling healthy and cancerous blood development.”
Self-organising structures
The new human embryo-like model simulates the cell changes that occur during the very early stages of human development, when our organs and blood system first begin to form.
The team observed the emergence of the three-dimensional hematoids under a microscope in the lab. By the second day, these had self-organised into three germ layers - called the ectoderm, mesoderm, and endoderm - the foundations of the human body plan that are crucial for shaping every organ and tissue, including blood.
By day eight, beating heart cells had formed. These cells eventually give rise to the heart in a developing human embryo.
By day thirteen, the team saw red patches of blood appearing in the hematoids. They also developed a method which demonstrated that blood stem cells in hematoids can differentiate into various blood cell types, including specialised immune cells, such as T-cells.
Shining a light on early human development
Stem cell-derived embryo models are crucial for advancing our knowledge of early human development.
The blood cells in hematoids develop to a stage that roughly corresponds to week four to five of human embryonic development. This very early stage of life cannot be directly observed in a real human embryo because it has implanted in the mother’s womb by this time.
There are clear regulations governing stem cell-based models of human embryos, and all research modelling human embryo development must be approved by ethics committees before proceeding. This study received the necessary approvals, and the resulting paper has been peer reviewed.
The scientists have patented this work through Cambridge Enterprise, the innovation arm of the University of Cambridge, which helps researchers translate their work into a globally leading economic and social impact.
Researchers have found a new way to produce human blood cells in the lab that mimics the process in natural embryos. Their discovery holds potential to simulate blood disorders like leukaemia, and to produce long-lasting blood stem cells for transplants.
The University of Cambridge has submitted its planning application for a revised masterplan for the future phases of the Eddington development, with delivery targeted to begin in 2026.
The outline planning application – a purposeful extension of Eddington’s first phase which began work in 2013 – marks a major step forward in realising the vision for North West Cambridge, and delivering more much-needed homes for the city. The proposals build on years of planning and three rounds of public consultation over the past 12 months. Feedback from local communities, residents, and stakeholders has been integral in shaping the vision for the future phases of Eddington.
The masterplan sets out how around 3,800 additional homes will be delivered, alongside new green spaces, community facilities, and active travel routes. Combined with the 1,850 homes already built or under construction in the first phase, Eddington will provide around 5,650 homes in total. Up to 50% of these will be affordable homes for University key workers with the rest on the open market – all of which help address the city’s critical shortage of housing.
Other key features of the submitted masterplan include:
Around 50 hectares of open space, including parks, play areas, and community gardens.
A diverse mix of homes, ranging from townhouses and maisonettes to apartments, designed with varied roofscapes and heights that complement the existing neighbourhood.
Enhanced community facilities, including new sports pitches, growing plots, and spaces for recreation such as running routes and BMX tracks.
Continued prioritisation of active and sustainable travel, building on Eddington’s current record of 79% of trips made by walking, cycling, or public transport.
Commercial and social spaces designed to foster a thriving, inclusive neighbourhood.
The revised masterplan also reflects the University’s commitment to creating an ambitious, enduring, and sustainable community that supports both the academic mission of the University and the wider needs of Cambridge. The first phase of the development has already delivered community hub Storey’s Field Centre, the University of Cambridge Primary School and a central square with shops, restaurants and more.
Matt Johnson, Head of Development for North West Cambridge at the University of Cambridge, said: “This is an important milestone for Eddington. Submitting the masterplan reflects years of engagement with the community, and we’re proud of the balanced and ambitious proposals we have put forward. Eddington is already a place where people live, learn, and connect, and with the future phases it will continue to grow into one of the most sustainable and vibrant neighbourhoods in Cambridge.”
Eddington represents one of the most significant development projects in the region, offering solutions to Cambridge’s acute housing challenges while creating a neighbourhood with global ambitions. By providing high-quality and affordable homes for University staff and postgraduate students, the masterplan will help the University continue to attract and retain world-leading researchers, academics, and innovators. This is vital to sustain Cambridge’s position as a global centre of excellence.
Indeed, a survey conducted by the University found that 89% of all respondents said it was either difficult or impossible to find a suitable home when they moved to Cambridge.
Beyond supporting the University’s mission, the plans will also strengthen the wider Cambridge ecosystem by enabling innovation, investment, and job creation to flourish, while ensuring the city remains a magnet for talent from around the world.
The updated masterplan builds on the original 2013 consent, refreshing and refining the vision to reflect the University’s current needs, community feedback, and the city’s increased demand for housing.
The outline planning application will now be considered by the Joint Development Management Committee which comprises members appointed by the City Council and South Cambridgeshire District Council. We look forward to working towards a positive outcome with local planning authorities and hope to move into delivering the future phases by the end of 2026.
A programme of public information sessions explaining the details of the planning application will be confirmed shortly.
Plans will deliver thousands of new homes, green spaces, and community facilities for Cambridge.
For decades, synthetic biologists have been developing gene circuits that can be transferred into cells for applications such as reprogramming a stem cell into a neuron or generating a protein that could help treat a disease such as fragile X syndrome.
These gene circuits are typically delivered into cells by carriers such as nonpathogenic viruses. However, it has been difficult to ensure that these cells end up producing the correct amount of the protein encoded by the synthetic gene.
To overcome that obstacle, MIT engineers have designed a new control mechanism that allows them to establish a desired protein level, or set point, for any gene circuit. This approach also allows them to edit the set point after the circuit is delivered.
“This is a really stable and multifunctional tool. The tool is very modular, so there are a lot of transgenes you could control with this system,” says Katie Galloway, an assistant professor in Chemical Engineering at MIT and the senior author of the new study.
Using this strategy, the researchers showed that they could induce cells to generate consistent levels of target proteins. In one application that they demonstrated, they converted mouse embryonic fibroblasts to motor neurons by delivering high levels of a gene that promotes that conversion.
MIT graduate student Sneha Kabaria is the lead author of the paper, which appears today in Nature Biotechnology. Other authors include Yunbeen Bae ’24; MIT graduate students Mary Ehmann, Brittany Lende-Dorn, Emma Peterman, and Kasey Love; Adam Beitz PhD ’25; and former MIT postdoc Deon Ploessl.
Dialing up gene expression
Synthetic gene circuits are engineered to include not only the gene of interest, but also a promoter region. At this site, transcription factors and other regulators can bind, turning on the expression of the synthetic gene.
However, it’s not always possible to get all of the cells in a population to express the desired gene at a uniform level. One reason for that is that some cells may take up just one copy of the circuit, while others receive many more. Additionally, cells have natural variation in how much protein they produce.
That has made reprogramming cells challenging because it’s difficult to ensure that every cell in a population of skin cells, for example, will produce enough of the necessary transcription factors to successfully transition into a new cell identity, such as a neuron or induced pluripotent stem cell.
In the new paper, the researchers devised a way to control gene expression levels by changing the distance between the synthetic gene and its promoter. They found that when there was a longer DNA “spacer” between the promoter region and the gene, the gene would be expressed at a lower level. That extra distance, they showed, makes it less likely that transcription factors bound to the promoter will effectively turn on gene transcription.
Then, to create set points that could be edited, the researchers incorporated sites within the spacer that can be excised by an enzyme called Cre recombinase. As parts of the spacer are cut out, the transcription factors bound at the promoter are brought closer to the gene of interest, turning up gene expression.
The researchers showed they could create spacers with multiple excision points, each targeted by a different recombinase. This allowed them to create a system called DIAL, which they could use to establish “high,” “med,” “low,” and “off” set points for gene expression.
After the DNA segment carrying the gene and its promoter is delivered into cells, recombinases can be added to the cells, allowing the set point to be edited at any time.
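To make the spacer-and-recombinase logic concrete, here is a minimal conceptual sketch in Python. It assumes the unedited, longest-spacer state is the lowest set point and that each distinct recombinase removes one spacer segment, stepping expression up one level; the recombinase names and the exact mapping to "off," "low," "med," and "high" are illustrative assumptions, not the construct reported in the paper.

# Conceptual sketch of the DIAL logic, not the actual genetic construct.
# Assumption: each distinct recombinase excises one spacer segment between the
# promoter and the gene, and a shorter spacer gives a higher expression set point.

SET_POINTS = ["off", "low", "med", "high"]  # longest spacer -> lowest expression

def dial_set_point(recombinases_applied):
    """Return the expression set point after a set of recombinases has acted."""
    segments_removed = len(set(recombinases_applied))
    step = min(segments_removed, len(SET_POINTS) - 1)
    return SET_POINTS[step]

print(dial_set_point([]))                        # off  (full-length spacer)
print(dial_set_point(["Cre"]))                   # low
print(dial_set_point(["Cre", "FlpO"]))           # med
print(dial_set_point(["Cre", "FlpO", "Dre"]))    # high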
The researchers demonstrated their system in mouse and human cells by delivering genes for different fluorescent proteins, as well as functional genes, and showed that they could get uniform expression across a population of cells at the target level.
“We achieved uniform and stable control. This is very exciting for us because lack of uniform, stable control has been one of the things that's been limiting our ability to build reliable systems in synthetic biology. When there are too many variables that affect your system, and then you add in normal biological variation, it’s very hard to build stable systems,” Galloway says.
Reprogramming cells
To demonstrate potential applications of the DIAL system, the researchers then used it to deliver different levels of the gene HRasG12V to mouse embryonic fibroblasts. This HRas variant has previously been shown to increase the rate of conversion of fibroblasts to neurons. The MIT team found that in cells that received a higher dose of the gene, a larger percentage of them were able to successfully transform into neurons.
Using this system, researchers now hope to perform more systematic studies of different transcription factors that can induce cells to transition to different cell types. Such studies could reveal how different levels of those factors affect the success rate, and whether changing transcription factor levels might alter the cell type that is generated.
In ongoing work, the researchers have shown that DIAL can be combined with a system they previously developed, known as ComMAND, that uses a feedforward loop to help prevent cells from overexpressing a therapeutic gene.
Using these systems together, it could be possible to tailor gene therapies to produce specific, consistent protein levels in the target cells of individual patients, the researchers say.
“This is something we’re excited about because both DIAL and ComMAND are highly modular, so you could not only have a well-controlled gene therapy that’s somewhat general for a population, but you could, in theory, tailor it for any given person or any given cell type,” Galloway says.
The research was funded, in part, by the National Institute of General Medical Sciences, the National Science Foundation, and the Institute for Collaborative Biotechnologies.
MIT engineers developed a way to set gene expression levels at off, low, or high. Using skin cells, the researchers delivered a cocktail (labeled with a red fluorescent protein, top row) that boosts the conversion of skin cells into motor neurons. Via promoter editing, they show that higher levels of this cocktail increase the number of motor neurons (green). In the bottom row, the same cells are labeled with a green fluorescent protein that is generated after the cells convert to motor neurons.
Researchers from the Cambridge Interfaith Research Forum and Goldsmiths University of London have issued an urgent call to rethink how faith and belief are understood and mobilised in planning new towns and settlements.
Their report, 'Housing with values: faith and belief perspectives on housing and community planning', presents the findings from a Faith & Belief Policy Collective study, produced in light of the UK Government’s ambitious pledge to build 1.5 million new homes.
The researchers’ analysis is based on interviews with practitioners and professionals including architects, housing developers, journalists, lawyers, activists, ordained ministers, policy makers and researchers, social historians, and scholars of religion. The report offers guiding principles for inclusive planning and proposes fuller civil–public collaboration to establish and disseminate good practice.
It follows the publication of the New Towns Taskforce (NTT)’s own recommendations to government in September 2025 which advised that plans for social infrastructure should include “faith-based spaces to enrich communities and open up opportunities for personal development” and that faith organisations should be involved in “community engagement strategy”.
The new report’s authors welcome this but warn that current planning systems in Britain have not yet embraced faith and belief communities as full partners in building thriving communities.
Co-author Dr Iona Hine from Cambridge’s Faculty of Divinity, said: “Developers, agencies, and other planning professionals recognise the effort required to form healthy communities and ensure everyone lives well. Our hope is they’re open to thinking about that challenge in dialogue with people of all flavours of faith and belief.”
The report warns that flourishing communities are undermined by a wide range of factors including: short-term developer models that prioritise profit over social infrastructure; tokenistic consultation; segregated housing patterns that entrench inequality and risk alienation; secular bias and low faith literacy among planners and developers; and intergenerational imbalance in new towns.
The report’s key recommendation is for a 'New Towns Faith Taskforce' to be established to advance the conversation about how best to harness the vision, resources, and overall contribution of faith and belief communities to the delivery of New Towns.
Its authors call for the early provision of schools, health centres, cultural, sporting and faith-based facilities; long-term, co-design consultation that builds trust and ownership; and integration with natural landscapes and local heritage, deepening attachment to place, among a range of other practical recommendations.
The report argues that faith and belief communities offer trusted networks, convening power, insider knowledge, volunteer capacity, inter-generational reach, as well as financial and spiritual capital, and cultural contributions.
Dr Hine and her colleagues point to modern international examples such as Singapore’s proactive planning for religious diversity, but also to model communities in Britain such as Bournville and Ebenezer Howard’s Garden City movement (Letchworth, Welwyn Garden City, Wythenshawe, etc), that paved the way, in their design and ethos, for the 32 postwar New Towns which are currently home to 2.8 million people across the UK.
Lead author Christopher Baker, Professor of Religion, Belief and Public Life at Goldsmiths, University of London said: “As we embark on this next chapter of New Town building in England, it is vital to understand the contribution that faith and belief bring to the sustaining of new communities, through their vision, experience, resources and local leadership.”
Dr Hine said: “This is a pivotal moment for housing supply and community formation in Britain. Treating faith and belief as partners in planning can accelerate social cohesion from day one, reduce loneliness and social isolation, and provide governance and voluntary capacity that complements statutory services. Ignoring these dimensions risks creating settlements that are physically complete but socially fragile.”
Dr Iona Hine manages the Cambridge Interfaith Programme and cross-sector Knowledge Hub. She is a member of the Faith & Belief Policy Collective and convenor of Cambridge Interfaith Research Forum.
The UK Government’s pledge to build 1.5 million homes can lead to local resilience, social cohesion and wellbeing but only if the planning process embraces faith and belief communities as full partners.
Some data is so sensitive that it is processed only in specially protected cloud areas. These are designed to ensure that not even a cloud provider can access the data. ETH Zurich researchers have now found a vulnerability that could allow hackers to breach these confidential environments.
Lead poisoning was once thought to largely be a problem of the past, as the globe gradually weaned itself off leaded gasoline in road vehicles in 2021. But has global lead pollution truly been resolved?
A new study led by Dr Chen Mengli, a Research Fellow from the Tropical Marine Science Institute at the National University of Singapore (NUS), in collaboration with researchers from Imperial College London, University of Warwick, University of Oxford, Jadavpur University, University of Michigan, Ann Arbor, Hebrew University of Jerusalem, Massachusetts Institute of Technology, and University of Bristol, showed that the answer is “not yet”: lead exposure remains a pressing public health and economic challenge in the 21st century. The researchers estimated that ongoing childhood lead exposure costs the world more than US$3.4 trillion in lost economic potential each year, with disproportionate impacts on low- and middle-income countries.
Published in Communications Earth & Environment on 30 September 2025, the findings suggest that without stronger safeguards, the ever-increasing demand for electrification and poorly regulated recycling of lead-containing products could entrench global inequalities and set back decades of progress in children’s health. To avert this, the researchers proposed a four-pronged strategy that policymakers and industries can act on today.
Lessons from history
Lead has been woven into human society for thousands of years, from the plumbing systems of the Roman Empire to the paints, pipes and industrial alloys still in use today. Its widespread use has left a toxic trail. Some of the earliest mass poisonings were linked to contaminated food and drink in Europe centuries ago. But the most recent incident came with the introduction of tetraethyl lead in gasoline in the 1920s, which for decades spewed millions of tonnes of the metal into the atmosphere.
By the 1970s, children across the world carried dangerously high blood lead levels, and the repercussions were severe, causing neurological damage, impaired development and countless premature deaths. The eventual ban on leaded gasoline, completed worldwide only in 2021, is heralded as one of the great public health victories of the modern era. Importantly, it showed that determined, coordinated global action could reduce exposure and save lives.
However, the team noted that the celebration of a “lead-free” world was premature. While blood lead levels fell in many high-income countries, they plateaued or even rose again in parts of Asia, Africa and Latin America. Legacy contamination from soils and infrastructure, coal combustion, numerous lead-laden products such as leaded paint, and informal recycling of lead-acid batteries and e-waste have all kept exposure alive.
“The perception that the problem was solved has to change. New sources of exposure continue to emerge, and historically emitted lead keeps redistributing through various natural processes,” added Dr Chen, who is also from the Department of Geography, Faculty of Arts and Social Sciences at NUS.
Today’s exposure and economic toll
Lead production today exceeds 16 million tonnes a year, with about 85 per cent going into lead–acid batteries that power vehicles, telecommunications and backup energy systems. Annual production now exceeds the total lead emitted during the entire era of leaded gasoline.
Though these items can be recycled, much of the reprocessing occurs under unsafe conditions, particularly in low- and middle-income countries. Informal recycling sites, often located near homes and schools, expose workers and surrounding communities to hazardous levels of lead. Coal combustion, contaminated soils and the continued sale of lead-laden paints, toys, and even food products, further compound the risks.
Drawing on an extensive body of published research, the team noted that the health consequences are most severe for children. Even at low levels, lead can damage the developing brain, lowering IQ, impairing learning and contributing to behavioural issues. This burden is often carried across a lifetime, as the effects are irreversible. In particular, the team estimated that childhood exposure today translates into a global economic loss exceeding US$3.4 trillion annually, equivalent to over 2 per cent of the world’s GDP.
Four-pronged approach to curb a resurgence
The team highlighted that recognising the continuing risks is the first step towards preventing another global health crisis. The study outlined four urgent areas for action to safeguard public health and reduce inequality:
Manage the life cycle of lead-containing products. With demand for batteries and electronics rising, stronger oversight is needed to minimise leakage during production, use and disposal.
Eliminate unsafe and illicit sources. Informal recycling and lead-laden goods such as lead paints, glazed ceramics and adulterated spices continue to expose millions to hazardous levels of lead.
Strengthen monitoring and community involvement. Early detection of lead leakage is often underfunded. Advances in low-cost sensors and machine-learning-based tools, combined with local knowledge, can help identify and address hotspots more effectively.
Capture the full socio-economic cost. Lead exposure disproportionately harms disadvantaged populations. Better models and population-level data are needed to quantify long-term impacts on health, education and productivity, and to guide equitable policy responses.
“The world rightly celebrated the phase-out of leaded gasoline as a triumph of international cooperation,” she said. “But the problem of lead exposure has not yet gone away. Unless we remain vigilant about both new sources of exposure and the legacy of lead in the environment, we may risk repeating the same tragedy,” Dr Chen emphasised.
Time for mandatory retirement ages for lawmakers, judges, presidents?
Francis Shen, Benjamin Silverman (on screen), and Nancy Gertner. Niles Singer/Harvard Staff Photographer
Liz Mineo, Harvard Staff Writer
October 10, 2025
Americans seem to mostly say yes; legal, medical scholars point to complexities of setting limits
Many professions come with mandatory retirement ages but not so for federal judges and lawmakers, with many remaining on the job well into their 70s and 80s.
That could be ripe for a change as concerns increase over cognitive decline among aging leaders and jurists, said experts during a Wednesday panel titled “How Old is Too Old to Govern?”
“There may well be, particularly now, a movement to have age limits or term limits for judges,” said retired federal judge Nancy Gertner, senior lecturer on Law at Harvard Law School, at the event sponsored by the Petrie-Flom Center. “They exist everywhere else in the world and in the majority of states. The Supreme Court’s lack of either an age limit or a term limit is really unusual.”
Questions about the graying of the nation’s leaders became a major campaign issue in recent elections, most notably in the races for the nation’s commander in chief. Former president Joe Biden was 82 at the end of his presidency, and Donald Trump, at 78, became the oldest person to be inaugurated as president for his second term.
The issue is widespread.
Both Republicans in the Senate and Democrats in the House were led until recently by octogenarians; Republican Senator Mitch McConnell announced his retirement on his 83rd birthday, and Democratic Congresswoman Nancy Pelosi will be 86 at the end of her term in 2027. The average age of a member of Congress is about 59.
On the Supreme Court, Justices Clarence Thomas (77) and Samuel Alito (75) are the most senior on the bench, followed by Sonia Sotomayor (71) and Chief Justice John Roberts (70).
According to the Federal Judicial Center, in 2024, the average age of U.S. federal judges was 67.68 years.
Most Americans support age limits for both politicians and Supreme Court justices, according to a report from the Pew Research Center, but that would require a constitutional amendment.
The U.S. Constitution sets 35 as the minimum age for president, 30 for senators, and 25 for representatives, but it does not set a maximum age limit. The document specifies neither minimum nor maximum age for Supreme Court justices.
During his remarks, Francis X. Shen, professor of law at the University of Minnesota and a member of the Harvard Medical School Center for Bioethics, pointed to a New York Times article that reported that more than a fifth of members of Congress are 70 years old or older.
“There are more people in Congress who are older than ever before,” said Shen, who moderated the event.
In the case of aging judges, some states have tackled the issue already. Thirty-two of the 50 states impose a mandatory retirement age, according to an article by the National Center for State Courts.
“The upside of that is that it’s administratively very easy. All you need is a birth certificate and a calculator,” Shen said. “The second upside is you reduce, though not entirely, some concerns about cognitive decline in older ages.”
Worldwide, most countries have either a compulsory retirement age for justices in their highest court — which ranges from 60 to 75 years — or term limits.
To address the issue of aging politicians, Shen discussed the possibility of a mandatory disclosure of cognitive assessments, similar to financial disclosures, to provide voters with additional information.
Benjamin C. Silverman, assistant professor of psychiatry and member of the Center for Bioethics at Harvard Medical School, highlighted the difficulties in assessing cognitive impairment, including the vast individual variation in cognitive decline, the variability in cognitive reserve among individuals, and the lack of a baseline neurocognitive functioning assessment.
“The biggest challenge to assessing cognitive impairment is lacking a baseline assessment,” said Silverman. “As we get older, if we display some sort of cognitive challenges, someone might say, ‘Let’s do some neuropsychological testing,’ but without the ability to compare that to something, without being able to see a trajectory, it’s really hard to know what to do with it.”
Gertner retired at 65 in 2011 to pursue other career options, including teaching and writing. She said imposing a retirement age on judges would ultimately be more effective, although she acknowledged the complications of setting one.
Individualized cognitive assessments might pose risks in implementation due to potential bias, but also because there isn’t agreement on how to assess cognitive impairment in judges, she said.
“If we don’t have an agreement on what comprises cognitive decline, I’m not sure that I feel comfortable about a cognitive test,” said Gertner. “What is the marker of individualized decline in our incredibly divided world, where judges are under attack?”
Gertner believes that mandatory retirement age for judges, including Supreme Court justices, would help avoid public debates about cognitive decline and also help the court regain some public support, which has dropped to “near historic lows,” according to a recent Pew report.
“I stand for retirement age, particularly for the Supreme Court justices,” said Gertner. “There is another generation coming down the pipe … and the retirement age should address the issue of cognition, but also the issue of democratic legitimacy.”
Science & Tech
Harsh past might bare its teeth
Getty Images
Kermit Pattison
Harvard Staff Writer
October 10, 2025
4 min read
Early adversity leads to higher aggression, fearfulness in adult canines, study says
Mistreating a dog may come back to bite you.
Scientists have long known that childhood abuse, neglect, and trauma can have lifelong consequences in humans. Now, a study by Harvard scientists links early adversity to similar effects in our oldest domesticated species.
In a study of nearly 4,500 dogs published in Scientific Reports, researchers found that adverse experiences in the first six months of puppyhood were strongly associated with elevated aggression and fearfulness in adult dogs.
“In the general population of dogs, you see a significant impact of life experience on behavior,” said Julia Espinosa, lead author of the new study and a research associate in the Department of Human Evolutionary Biology (HEB). “What we found that was really surprising is that this impact varies by the breed of the dog, so that suggests there’s an important heritable component to behavior and individual susceptibility to stress.”
Numerous studies have established that early adversity has lifelong effects on humans as well as other animals, including mice. But no comprehensive studies had been performed on dogs until now. The research was conducted in the lab of Erin Hecht, an assistant professor in Human Evolutionary Biology and a prominent researcher of canine biology, evolution, and domestication.
Espinosa collected data on 4,497 dogs by having their owners fill out a survey that covered whether the animals had been subjected to harsh punishments such as beatings, having their mouths held shut, or being pinned down by humans seeking to assert dominance (the so-called “alpha roll”). The survey also asked whether the dogs had gone through traumatic events such as living on the streets, being attacked by other dogs, or getting hit by cars.
“We know that the nervous system is especially plastic early in life,” said Hecht. “In this study, we found that in dogs, traumatic experiences during the first six months had the biggest impact on their fear and aggression behavior later in life.
“This lines up with what we’ve seen in humans and in other animals — there’s this critical period of development when the nervous system is more sensitive and impacts during that time can have bigger effects.”
As dog owners can attest, different breeds exhibit stark differences in behavior and temperament. Researchers uncovered wide variability in baseline levels of fear and aggression among different breeds. For example, breeds that specialized in guarding livestock or bringing down big game were more prone to aggression.
Within each breed, researchers reported that puppyhood trauma had measurable effects: Animals with histories of adversity displayed greater fear and aggression than other members of the same breeds. These experiences were at least as influential as other factors such as sex and whether the animal had been neutered.
The impacts were most dramatic in breeds such as American Eskimo Dogs, American Leopard Hounds, and Siberian Huskies. Labradors, on the other hand, showed relatively little effect.
More than half the dogs in the survey were single-breed animals. About 48 percent were mutts of mixed or unknown ancestry.
About one-third of the animals were reported to have suffered some form of adversity. But Hecht cautioned that those numbers were probably unusually high in this study population.
“We specifically recruited dogs that had trauma histories,” she said. “So I don’t think this necessarily means that a third of the dogs out there in the world have been neglected or abused.”
The researchers heard heartbreaking stories. One Golden Retriever puppy was fed only a few tablespoons of food every day, and by the time he was rescued at age 6 months, he weighed only 20 pounds. Although his body recovered, he remained unusually fearful.
The lesson: Our best friends carry early trauma for the rest of their lives.
“Maybe this makes them a little bit more like us than we realized,” said Hecht.
Arts & Culture
Brief bursts of wisdom
Aphorism lover and historian James Geary reflects on how ancient literary art form fits into age of social media
Liz Mineo
Harvard Staff Writer
October 10, 2025
5 min read
James Geary. Stephanie Mitchell/Harvard Staff Photographer
Since James Geary, adjunct lecturer in public policy at Harvard Kennedy School, encountered his first aphorism at age 8, his love for them has only grown. So much so that in 2005, he published a bestselling book, “The World in a Phrase: A Brief History of the Aphorism.” The book’s second edition comes out this month.
In an interview, which has been edited for clarity and length, Geary spoke to the Gazette about the appeal of those short, philosophical phrases, how they differ from slogans or tweets, and why memes can be the new aphorisms.
What’s the appeal of aphorisms?
Aphorisms are the oldest written art form on the planet, but they’re also the most contemporary. With the rise of social media and short-form communication, in many ways the aphorism has found its perfect technological platform. So much of social media today is just toxic — hot takes, rage posts, and all that kind of stuff — but aphorisms from their beginning, 5,000 years ago in China and Egypt, were mostly philosophical thoughts. They’re often witty and are a very sophisticated form of literature that, unlike so much social media today, is not intended to confirm the opinions you already have, but to challenge and provoke you to think further and deeper.
How do aphorisms differ from proverbs, slogans, or tweets?
A key component of an aphorism is that it has to be philosophical; it has to make you think. And I don’t mean that it has to be esoteric or impenetrable, but about the ultimate questions in life. Aphorisms help us to examine our own beliefs, practices, and our own biases. They’re kind of a philosophy for daily life. Unlike political or commercial slogans or tweets, aphorisms provide answers to that old philosophical question of how to live a good life.
Aphorisms have to be super accessible; you can understand them in a second. And they often feature a twist that upends expectations. Mae West, a famous American actress from the 1940s, said, “It’s not the men in my life that count; it’s the life in my men.” Or JFK’s “Ask not what your country can do for you, but what you can do for your country.” Or French writer Nicolas Chamfort’s “Society is composed of two great classes: those who have more appetite than dinners, and those who have more dinners than appetite.” Their mode of delivery is brief, but the impact of a really good aphorism is long-lasting; they are in your head for a lifetime. I first encountered the aphorism “The only difference between a rut and a grave is the depth” when I was 8 years old, and it has never left my mind.
“Aphorisms have to be super accessible; you can understand them in a second. And they often feature a twist that upends expectations.”
You say in your book that memes are the new form of aphorism. How so?
Since memes appeared on the scene, I realized that aphorisms don’t have to involve language.
Aphorisms can work with visual or textual signs, or a combination of the two. Clet Abraham, for example, uses no words in his visual aphorisms; he takes street signs and twists them to bring out a philosophical meaning. Shilpa Gupta uses text, but she puts the text into the environment so it feels like you’re walking past an aphorism. Xu Bing, the Chinese artist, uses language but kind of distorts it, playing with the ways in which we perceive images and the way we understand language.
Memes are the next step in the evolution of the aphorism. But I wouldn’t say every meme is an aphorism, just like every tweet is not an aphorism. Even if it’s a meme or a visual textual combination, it should still have a twist, it should still be philosophical. The vast majority of memes or tweets are not aphorisms, but the aphorism is adapting to a newly accessible form of communication, which is visual, not only textual.
What’s the common thread among aphorists across eras? What are they preoccupied with?
Politics is a very common thread in many aphorisms from ancient times until today. An ancient Egyptian ruler passed his wisdom to his child who was going to succeed him by saying, “To rule is to know how to be ruled.” And then you have Stanisław Jerzy Lec, a Polish dissident who lived under Soviet rule, who wrote, “Politics: A Trojan horse race.” Daily life is a big theme along with love, friendship, relationships, and money. Mark Twain said, “The lack of money is the root of all evil.”
Austrian writer Marie von Ebner-Eschenbach said, “An intelligent woman has millions of born enemies … all the stupid men.” Polish writer Urszula Zybura said, “If the future had known what lay ahead, it would have never come,” which sums up the political history of Central Europe under Soviet rule. American thinkers such as Twain, Benjamin Franklin, Ralph Waldo Emerson, and Henry David Thoreau are concerned with individualism. Thoreau said, “Let him step to the music which he hears, however measured or far away.”
Do you have any aphorisms of your own?
Yes, I do. Usually, they come out of the blue, or when I’m writing something else, and an aphorism pops up in my mind. Here are a couple: “Even your disguise reveals you.” “If your expectations are low, you are certain to meet them.” This one came out of my classes at the Kennedy School: “Good advice for writing is good advice for living.”
Hundreds of thousands of chemicals are manufactured by the chemical industry, which transforms raw materials – usually fossil fuels – into useful end products. Due to its size and its use of fossil fuel feedstocks, the chemical industry is responsible for roughly 6% of global carbon emissions.
But researchers, led by the University of Cambridge, are developing new methods that could one day lead to the ‘de-fossilisation’ of this important sector.
They have developed a hybrid device that combines light-harvesting organic polymers with bacterial enzymes to convert sunlight, water and carbon dioxide into formate, a fuel that can drive further chemical transformations.
Their ‘semi-artificial leaf’ mimics photosynthesis, the process plants use to convert sunlight into energy, and does not require any external power source. Unlike earlier prototypes, which often relied on toxic or unstable light absorbers, the new biohybrid design avoids toxic semiconductors, lasts longer, and can run without additional chemicals that previously hindered efficiency.
In tests, the researchers used sunlight to convert carbon dioxide into formate and then used it directly in a ‘domino’ chemical reaction to produce an important type of compound used in pharmaceuticals, with high yield and purity.
Their results, reported in the journal Joule, mark the first time that organic semiconductors have been used as the light-harvesting component in this type of biohybrid device, opening the door to a new family of sustainable artificial leaves.
The chemical industry is central to the world economy, producing products from pharmaceuticals and fertilisers, to plastics, paints, electronics, cleaning products, and toiletries.
“If we’re going to build a circular, sustainable economy, the chemical industry is a big, complex problem that we must address,” said Professor Erwin Reisner from Cambridge’s Yusuf Hamied Department of Chemistry, who led the research. “We’ve got to come up with ways to de-fossilise this important sector, which produces so many important products we all need. It’s a huge opportunity if we can get it right.”
Reisner’s research group specialises in the development of artificial leaves, which turn sunlight into carbon-based fuels and chemicals without relying on fossil fuels. But many of their earlier designs depend on synthetic catalysts or inorganic semiconductors, which either degrade quickly, waste much of the solar spectrum, or contain toxic elements such as lead.
“If we can remove the toxic components and start using organic elements, we end up with a clean chemical reaction and a single end product, without any unwanted side reactions,” said co-first author Dr Celine Yeung, who completed the research as part of her PhD work in Reisner’s lab. “This device combines the best of both worlds – organic semiconductors are tuneable and non-toxic, while biocatalysts are highly selective and efficient.”
The new device integrates organic semiconductors with enzymes from sulphate-reducing bacteria, splitting water into hydrogen and oxygen or converting carbon dioxide into formate.
The researchers have also addressed a long-standing challenge: most systems require chemical additives, known as buffers, to keep the enzymes running. These can break down quickly and limit stability. By embedding a helper enzyme, carbonic anhydrase, into a porous titania structure, the researchers enabled the system to work in a simple bicarbonate solution — similar to sparkling water — without unsustainable additives.
“It’s like a big puzzle,” said co-first author Dr Yongpeng Liu, a postdoctoral researcher in Reisner’s lab. “We have all these different components that we’ve been trying to bring together for a single purpose. It took us a long time to figure out how this specific enzyme is immobilised on an electrode, but we’re now starting to see the fruits from these efforts.”
“By really studying how the enzyme works, we were able to precisely design the materials that make up the different layers of our sandwich-like device,” said Yeung. “This design made the parts work together more effectively, from the tiny nanoscale up to the full artificial leaf.”
Tests showed the artificial leaf produced high currents and achieved near-perfect efficiency in directing electrons into fuel-making reactions. The device successfully ran for over 24 hours, more than twice as long as previous designs.
The researchers are hoping to further develop their designs to extend the lifespan of the device and adapt it so it can produce different types of chemical products.
“We’ve shown it’s possible to create solar-powered devices that are not only efficient and durable but also free from toxic or unsustainable components,” said Reisner. “This could be a fundamental platform for producing green fuels and chemicals in future – it’s a real opportunity to do some exciting and important chemistry.”
The research was supported in part by the Singapore Agency for Science, Technology and Research (A*STAR), the European Research Council, the Swiss National Science Foundation, the Royal Academy of Engineering, and UK Research and Innovation (UKRI). Erwin Reisner is a Fellow of St John’s College, Cambridge. Celine Yeung is a Member of Downing College, Cambridge.
Researchers have demonstrated a new and sustainable way to make the chemicals that are the basis of thousands of products – from plastics to cosmetics – we use every day.
Scientists have identified an unusual type of brain cell that may play a vital role in progressive multiple sclerosis (MS), likely contributing to the persistent inflammation characteristic of the disease.
The discovery, reported today in Neuron, is a significant step towards understanding the complex mechanisms that drive the disease and provides a promising new avenue for research into more effective therapies for this debilitating condition.
MS is a chronic disease in which the immune system mistakenly attacks the brain and spinal cord, disrupting communication between the brain and the body. While many individuals initially experience relapses and remissions, a significant proportion transition to progressive MS, a phase marked by a steady decline in neurological function with limited treatment options.
To model what is happening in the disease, researchers at the University of Cambridge, UK, and National Institute on Aging, US, took skin cells from patients with progressive MS and reprogrammed them into induced neural stem cells (iNSCs), an immature type of cell capable of dividing and differentiating into various types of brain cells.
Using this ‘disease in a dish’ approach, the team observed that a subset of the cultured brain cells was somehow reverting to an earlier developmental stage, transforming into an unusual cell type known as radial glia-like (RG-like) cells. Notably, these cells were highly specific and appeared approximately six times more frequently in iNSC lines derived from individuals with progressive MS compared to controls. As a result, they were designated as disease-associated RG-like cells (DARGs).
These DARGs exhibit characteristic features of radial glia—specialized cells that serve as scaffolding during brain development and possess the capacity to differentiate into various neural cell types. Essentially, they function both as structural support and as fundamental building blocks, making them critical for proper brain development. Unexpectedly, DARGs not only revert to an ‘infant’ state but also display hallmark features of premature aging, or senescence.
These newly identified DARGs possess a distinctive epigenetic profile—patterns of chemical modifications that regulate gene activity—although the factors influencing this epigenetic landscape remain unclear. These modifications contribute to an exaggerated response to interferons, the immune system’s ‘alarm signals,’ which may help explain the high levels of inflammation observed in MS.
Professor Stefano Pluchino from the Department of Clinical Neurosciences at the University of Cambridge, joint senior author, said: “Progressive MS is a truly devastating condition, and effective treatments remain elusive. Our research has revealed a previously unappreciated cellular mechanism that appears central to the chronic inflammation and neurodegeneration driving the progressive phase of the disease.
“Essentially, what we’ve discovered are glial cells that don’t just malfunction – they actively spread damage. They release inflammatory signals that push nearby brain cells to age prematurely, fuelling a toxic environment that accelerates neurodegeneration.”
The team validated their findings by cross-referencing with human data from individuals with progressive MS. By analysing gene expression patterns at the single-cell level—including new data exploring the spatial context of RNA within post-mortem MS brain tissue—they confirmed that DARGs are specifically localised within chronically active lesions, the regions of the brain that sustain the most significant damage. Importantly, DARGs were found near inflammatory immune cells, supporting their role in orchestrating the damaging inflammatory environment characteristic of progressive MS.
By isolating and studying these disease-driving cells in vitro, the researchers aim to explore their complex interactions with other brain cell types, such as neurons and immune cells. This approach will help to explain the cellular crosstalk that contributes to disease progression in progressive MS, providing deeper insights into underlying pathogenic mechanisms.
Dr Alexandra Nicaise, co-lead author of the study from the Department of Clinical Neurosciences at Cambridge, added: “We’re now working to explore the molecular machinery behind DARGs, and test potential treatments. Our goal is to develop therapies that either correct DARG dysfunction or eliminate them entirely.
“If we’re successful, this could lead to the first truly disease-modifying therapies for progressive MS, offering hope to thousands living with this debilitating condition.”
To date, DARGs have only ever been seen in a handful of diseases, such as glioblastoma and cerebral cavernomas, clusters of abnormal blood vessels. However, this may be because scientists have until now lacked the tools to find them. Professor Pluchino and colleagues believe their approach is likely to reveal that DARGs play an important role in other forms of neurodegeneration.
This work received funding from the Medical Research Council, the Wellcome Trust, the National MS Society, FISM - Fondazione Italiana Sclerosi Multipla, the European Committee for Treatment and Research in Multiple Sclerosis (ECTRIMS), the National Institute on Aging, the UK Dementia Research Institute, the Austrian Science Fund FWF, the UK MS Society Centre of Excellence, the Bascule Charitable Trust, and the Ferblanc Foundation, with support from the National Institute for Health and Care Research Cambridge Biomedical Research Centre.
Four faculty members from the Department of Biomedical Engineering at the College of Design and Engineering in the National University of Singapore have been conferred prestigious awards by the International Federation for Medical and Biological Engineering (IFMBE) for their exemplary contributions to medical and biological engineering. The awards were presented on 29 September 2025 during the World Congress on Medical Physics and Biomedical Engineering 2025, which was held in Adelaide, Australia. IFMBE is one of the world’s largest federations of national and transnational societies in biomedical engineering, and it is also a Non-Governmental Organisation (NGO) for the United Nations and the World Health Organization (WHO).
Emeritus Professor James Goh (left) received the IFMBE Honorary Life Member Award, in recognition of his distinguished leadership and long-standing contributions to the global biomedical engineering community.
Professor Lim Chwee Teck was presented with the IFMBE Otto Schmitt Award in recognition of his pioneering work in mechanobiology, microfluidics, and wearable technologies, as well as his leadership, innovation, and impact in advancing the field. Prof Lim is also NUSS Chair Professor, Director of the Institute for Health Innovation & Technology (iHealthtech), and Principal Investigator at Mechanobiology Institute.
Professor Li Jun received the IFMBE Vladimir K. Zworykin Award for his outstanding research on supramolecular self-assembled nanomaterials and hydrogels, which have advanced the fields of nanomedicine and sustainable agriculture.
Assistant Professor Andy Tay was honoured with the IFMBE–IAMBE Early Career Award, which recognises promising young researchers within seven years of completing their PhD. Asst Prof Tay is also a Principal Investigator at iHealthtech.
For seven decades, the Department of Civil and Environmental Engineering (CEE) in the College of Design and Engineering at the National University of Singapore has played a pivotal role in Singapore’s transformation and contributed to advancements in global engineering. To commemorate this significant milestone, CEE organised the 70th Anniversary Symposium 2025 on 10 October 2025, in partnership with the Professional Engineers Board (PEB) and The Institution of Engineers, Singapore (IES).
Held at the Marina Bay Sands Expo & Convention Centre, the event brought together partners from the government and industry sectors, as well as students from primary and secondary schools, pre-university institutions and polytechnics.
Comprising three components – the symposium, an exhibition and a student fair – the event provided a unique platform where academia, government, and industry came together to share perspectives and innovative solutions, explore collaborative approaches and excite young talents who will shape the future of the built environment sector.
Guest-of-Honour Ms Indranee Rajah, Minister in the Prime Minister’s Office and Second Minister for Finance and National Development, said, “As we face challenges like climate change and resource scarcity, the role of civil and environmental engineers has never been more critical – or more exciting.” Highlighting that built environment professionals “shape how we live, work, and play” and their work “creates the infrastructure that connects communities, and the solutions that protect our environment”, Ms Rajah discussed the various efforts to uplift the built environment sector.
Professor Teo Kie Leong, Dean of the College of Design and Engineering at NUS, expressed his appreciation to CEE’s partners for their unstinting support. “Together, we have nurtured talent, advanced innovation, and shaped Singapore’s built environment into one we can all take pride in,” he said.
“Singapore has been our living laboratory and our shared mission – to engineer a safe, sustainable, and resilient home for generations to come. Today’s symposium reflects that mission and our direction forward,” said Professor Richard Liew, Head of CEE. He added that CEE remained committed to three key thrusts, namely, education, research, and engagement, over the next decade.
A resilient, sustainable and innovative Singapore
The symposium featured speakers from key agencies in Singapore's built environment sector — namely, Building and Construction Authority (BCA), Housing & Development Board (HDB), PUB, Home Team Science and Technology Agency, Land Transport Authority, Urban Redevelopment Authority, and JTC Corporation — who shared important milestones achieved, addressed the current challenges facing the industry, and outlined strategies to build a more resilient and sustainable future for Singapore.
The concluding session of the symposium featured academic and industry leaders discussing the innovative strategies and collaborative approaches necessary for the sector to advance and succeed in an increasingly complex and evolving environment.
As part of his symposium lecture, Associate Professor Raymond Ong from CEE outlined the research directions that will help tackle current and future challenges. His focus was on sustainability and green technologies, robotics and automation, as well as intelligent sensing and autonomy within the built environment. Additionally, he emphasised the importance of developing industry capabilities in the context of an ageing infrastructure and an ageing society.
Innovation comes alive
An interesting highlight of the event was an exhibition featuring key innovations that shaped Singapore today and will drive the nation’s progress into the future. Among the exhibits were:
Green cement and 3D printing (CEE)
This technology pushes the limit of the recycling percentage in cement, relying solely on local waste. The innovation not only reduces the carbon footprint of the built environment and the volume of waste destined for Semakau, but also reflects CEE’s strong commitment to advancing green and sustainable technologies.
Periodic inspection and defect detection of coastal infrastructure using autonomous underwater vehicles (CEE, BeeX and Delta Marine Consultants)
A groundbreaking approach to enhance the monitoring and protection of Singapore’s coastal infrastructure is the deployment of advanced Hovering Autonomous Underwater Vehicles (HAUVs) to conduct precise and thorough inspections of submerged structures, reducing reliance on human divers and lowering inspection costs. This innovative solution aims to set a new standard for coastal protection, ensuring the long-term safety and resilience of Singapore's shores.
Centre for Resource Circularity and Resilience (CEE)
A series of low-carbon cements, carbon-mineralised aggregates, and aggregates developed from 100 per cent local wastes could be used to address the waste and resource challenges of Singapore. These innovations offer sustainable pathways to upcycle large volumes of excavated marine clay from upcoming megaprojects, turning what was once waste into high-value construction resources.
First on-site 3D printed concrete building (CEE and Woh Hup)
The Norwood Grand Project, located at Champions Way, is Woh Hup’s first on-site 3D concrete printing project in Singapore approved by BCA. CEE researchers contributed their expertise in materials engineering and structural performance, ensuring adherence to local standards while integrating cutting-edge technology.
Inspiring future engineering talents through fun and play
Close to 300 students from around 30 primary and secondary schools, pre-university institutions and polytechnics participated in the student fair, which was designed to immerse students in the world of engineering. These students had the opportunity to take part in a variety of hands-on activities and dynamic interactive demonstrations, and to connect with engineers as well as young professionals.
These experiences were tailored to spark curiosity and enthusiasm, encouraging students to explore the principles of engineering in action while discovering how these concepts apply to real-world situations, from desalination to tunnelling to protecting Singapore’s shores.
Scientists from the Singapore Centre for Environmental Life Sciences Engineering (SCELSE) – a biofilm & microbiome research centre – and the National University of Singapore (NUS), have uncovered a surprising strategy plants use to thrive when an essential nutrient — sulphur — is in short supply.
The team discovered that when soil microbes compete with each other in the rhizosphere (the soil surrounding plant roots), they release a well-known compound called glutathione. This compound enhances plant growth under sulphur-deficient conditions. The catch: while plants benefit, some microbes lose out in their own growth.
The researchers call this balancing act a “trans-kingdom fitness trade-off” — where one kingdom of life (microbes) sacrifices part of its growth, while another (plants) gains resilience.
The global problem: declining sulphur in soils
Sulphur is essential for plant growth, just like nitrogen and phosphorus. It supports protein synthesis, vitamin production, and stress resistance.
Historically, sulphur pollution from industrial emissions replenished soils worldwide. But with cleaner energy and stricter air-quality regulations, atmospheric sulphur levels have dropped. While good for air quality and human health, this has unintentionally reduced natural sulphur deposits in agricultural soils.
Over time, crops have drawn down existing soil sulphur, leaving soils deficient. To compensate, farmers increasingly apply synthetic sulphur-based fertilisers. These short-term fixes come with costs: runoff from farmlands contaminates rivers, lakes, and ecosystems, exacerbating environmental degradation.
The new discovery: a microbial boost
The SCELSE-led study, published in Cell Host & Microbe on 26 September 2025, provides a novel mechanistic explanation of how plants and microbes jointly navigate nutrient stress. The researchers found that when soil bacteria compete for nutrients, they release glutathione — a compound that boosts plant growth under sulphur-deficient conditions, even though it reduces bacterial growth.
This improvement in plant fitness came at the cost of bacterial fitness — a biological trade-off across kingdoms of life.
“This work introduces the concept of a trans-kingdom fitness trade-off and provides a mechanistic explanation for it,” said first author Arijit Mukherjee, who was a PhD student at SCELSE and the NUS Department of Biological Sciences when the study was conducted. “Plant fitness isn’t just about the plant itself — it’s about the whole community of microbes around it. Understanding these trade-offs helps us design better microbial solutions for resilient crops.”
Why it matters
Such trade-offs are likely widespread across host–microbe systems, not just in plants, and may represent hidden strategies by which holobionts (hosts and their associated microbes) adapt collectively to environmental cues.
For agriculture, this insight is powerful: instead of relying on chemical fertilisers, researchers can design microbial consortia (or “cocktails”) that naturally boost crop health under nutrient stress. This nature-based solution can reduce fertiliser use, improve soil health, and contribute to global food security.
Assoc Prof Sanjay Swarup, Principal Investigator at SCELSE, explained, “This study provides a blueprint for sustainable agriculture. By tapping into natural plant–microbe partnerships, we can reduce fertiliser use, protect ecosystems, and still secure global food supplies.”
From discovery to application: patent filed
To translate this breakthrough into practice, the team has filed a patent covering applications of this plant–microbe mechanism in agriculture. This will enable the development of bio-based products that support crops in sulphur-deficient soils, reducing reliance on chemical inputs.
“By considering not only microbial functions but also their interactions, we can design more effective microbial consortia for agriculture,” added Assoc Prof Swarup, who is also the Deputy Director for NUS Environmental Research Institute (NERI) and a faculty member of the NUS Department of Biological Sciences. “This is the path toward resilient, climate-ready farming.”
Do cells contain a mechanism that decides on their fates? Researchers at ETH Zurich have demonstrated in a new study that large clusters of molecules determine a cell’s future.
The National University of Singapore (NUS) has appointed Professor Joseph Liow as the third Dean of the Lee Kuan Yew School of Public Policy (LKYSPP), with effect from 15 October 2025.
A renowned scholar of international relations and Asian strategic affairs, Prof Liow brings to LKYSPP a distinguished record of academic excellence and institutional leadership experience. He will succeed Associate Professor Leong Ching who has served as Acting Dean of LKYSPP since 1 July 2025.
Prior to joining NUS, Prof Liow spent 28 years of his academic career at the Nanyang Technological University (NTU), progressing from researcher to Chair Professor and advancing to senior leadership roles, including Dean of the NTU S. Rajaratnam School of International Studies and Dean of the NTU College of Humanities, Arts and Social Sciences. Prof Liow joined LKYSPP as the Wang Gungwu Professor in East Asian Affairs on 1 October 2025.
His research specialises in subjects of Muslim societies and politics in Southeast Asia, the international relations of Southeast Asia, US foreign policy, and the geopolitics of East Asia and the Indo-Pacific. Widely regarded as a leading voice on Southeast Asian affairs, Prof Liow’s insights on these matters have informed and influenced policymakers, diplomats, and international institutions across the region.
Beyond academia, Prof Liow has served as an advisor and commentator on regional and global policy issues, including engagements with international organisations, think tanks, and government agencies. His ability to bridge scholarly rigour with real-world relevance has made him a trusted interlocutor in global policy circles.
Prof Liow is currently Chairman of the Middle East Institute at NUS, a position he assumed in September 2024, and previously held the Tan Kah Kee Chair in Comparative and International Politics at NTU.
In his new role, Professor Liow will lead LKYSPP in advancing its mission to develop future public leaders, contribute cutting-edge research, and engage with pressing policy challenges across Asia and beyond.
NUS President, Professor Tan Eng Chye, said: “Professor Joseph Liow has built a stellar career in advancing academic thinking on regional and global affairs, and engaging with the public, policymakers, academics and students in constructive and impactful ways. His academic breadth, respected scholarship, outstanding leadership and genuine commitment to public service make him uniquely suited to lead the Lee Kuan Yew School of Public Policy into its next chapter. I am confident he will build on the School’s strong foundations and further its mission as a centre of excellence in public policy education, research and engagement.”
“The University also expresses its deep appreciation to Assoc Prof Leong for her strong stewardship and commitment to the School as Acting Dean during this period of transition. Her leadership at this pivotal time has put the School on a strong footing for the trajectory ahead,” Prof Tan added.
Reflecting on his appointment as Dean, Professor Liow said: “It is a privilege to join the Lee Kuan Yew School of Public Policy, an institution that plays a vital role in shaping policy discourse and public leadership. These are exciting yet challenging times for Asia and the rest of the world. As societies grapple with complex challenges and transitions on issues like sustainability, technology and geopolitics, public policy and governance has never been more critical or contested. I look forward to working closely with the School’s outstanding faculty, students, staff and alumni to build on its strong foundations and expand its regional and global contributions.”
Founded in 2004, the NUS Lee Kuan Yew School of Public Policy is one of Asia’s leading institutions for public policy education and research.
Please refer to Annex A for the biography of Prof Joseph Liow.
Associate Professor Gene-Wei Li has accepted the position of associate head of the MIT Department of Biology, starting in the 2025-26 academic year.
Li, who has been a member of the department since 2015, brings a history of departmental leadership, service, and research and teaching excellence to his new role. He has received many awards, including a Sloan Research Fellowship (2016), an NSF Career Award (2019), Pew and Searle scholarships, and MIT’s Committed to Caring Award (2020). In 2024, he was appointed as a Howard Hughes Medical Institute (HHMI) Investigator.
“I am grateful to Gene-Wei for joining the leadership team,” says department head Amy E. Keating, the Jay A. Stein (1968) Professor of Biology and professor of biological engineering. “Gene will be a key leader in our educational initiatives, both digital and residential, and will be a critical part of keeping our department strong and forward-looking.”
A great environment to do science
Li says he was inspired to take on the role in part because of the way MIT Biology facilitates career development during every stage — from undergraduate and graduate students to postdocs and junior faculty members, as he was when he started in the department as an assistant professor just 10 years ago.
“I think we all benefit a lot from our environment, and I think this is a great environment to do science and educate people, and to create a new generation of scientists,” he says. “I want us to keep doing well, and I’m glad to have the opportunity to contribute to this effort.”
As part of his portfolio as associate department head, Li will continue in the role of scientific director of the Koch Biology Building, Building 68. In the last year, the previous scientific director, Stephen Bell, Uncas and Helen Whitaker Professor of Biology and HHMI Investigator, has continued to provide support and ensured a steady ramp-up, transitioning Li into his new duties. The building, which opened its doors in 1994, is in need of a slate of updates and repairs.
Although Li will be managing more administrative duties, he has provided a stable foundation for his lab to continue its interdisciplinary work on the quantitative biology of gene expression, parsing the mechanisms by which cells control the levels of their proteins and how this enables cells to perform their functions. His recent work includes developing a method that leverages the AI tool AlphaFold to predict whether protein fragments can recapitulate the native interactions of their full-length counterparts.
“I’m still very heavily involved, and we have a lab environment where everyone helps each other. It’s a team, and so that helps elevate everyone,” he says. “It’s the same with the whole building: nobody is working by themselves, so the science and administrative parts come together really nicely.”
Teaching for the future
Li is considering how the department can continue to be a global leader in biological sciences while navigating the uncertainty surrounding academia and funding, as well as the likelihood of reduced staff support and tightening budgets.
“The question is: How do you maintain excellence?” Li says. “That involves recruiting great people and giving them the resources that they need, and that’s going to be a priority within the limitations that we have to work with.”
Li will also be serving as faculty advisor for the MIT Biology Teaching and Learning Group, headed by Mary Ellen Wiltrout, and will serve on the Department of Biology Digital Learning Committee and the new Open Learning Biology Advisory Committee. Li will serve in the latter role in order to represent the department and work with new faculty member and HHMI Investigator Ron Vale on Institute-level online learning initiatives. Li will also chair the Biology Academic Planning Committee, which will help develop a longer-term outlook on faculty teaching assignments and course offerings.
Li is looking forward to hearing from faculty and students about the way the Institute teaches, and how it could be improved, both for the students on campus and for the online learners from across the world.
“There are a lot of things that are changing; what are the core fundamentals that the students need to know, what should we teach them, and how should we teach them?”
Although the commitment to teaching remains unchanged, there may be big transitions on the horizon. With two young children in school, Li is all too aware that the way that students learn today is very different from what he grew up with, and also very different from how students were learning just five or 10 years ago — writing essays on a computer, researching online, using AI tools, and absorbing information from media like short-form YouTube videos.
“There’s a lot of appeal to a shorter format, but it’s very different from the lecture-based teaching style that has worked for a long time,” Li says. “I think a challenge we should and will face is figuring out the best way to communicate the core fundamentals, and adapting our teaching styles to the next generation of students.”
Ultimately, Li is excited about balancing his research goals along with joining the department’s leadership team, and knows he can look to his fellow researchers in Building 68 and beyond for support.
“I’m privileged to be working with a great group of colleagues who are all invested in these efforts,” Li says. “Different people may have different ways of doing things, but we all share the same mission.”
Associate professor of biology, director of scientific operations in Building 68, and HHMI Investigator Gene-Wei Li was inspired to step into the role of associate department head in part because of the way MIT Biology facilitates career development during every stage.
Will Flintoft ’26. Niles Singer/Harvard Staff Photographer
Campus & Community
Flew home as Will Flintoft, returned as Rhodes Scholar
Applied math concentrator to study computer science, theology with eye toward AI
Matt Goisman
Harvard Correspondent
October 9, 2025
4 min read
Will Flintoft flew home to Australia for a few days last month and returned to campus as a Rhodes Scholar.
The senior mathematics and philosophy double-concentrator will spend the next two years at the University of Oxford, where he plans to pursue advanced degrees in two fields: mathematics and foundations of computer science, and philosophical theology.
“Part of the trajectory I see for myself is lending a perspective which is both technically informed, but also deeply plugged into ethics,” said Flintoft, who is also pursuing a concurrent master’s degree in applied math at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS). “We seem to be at this really catalytic moment in the world with regards to large language models and machine learning more generally. There’s a lot of work to be done making sure that this kind of technology is being developed in a way that’s smart and also sensitive to the ways in which it can go wrong and where there can be pitfalls.”
Flintoft applied to the Rhodes Scholarship last August and was named a finalist for the one scholarship allocated to the Australian state of Victoria. He needed to complete two panel interviews — the first was virtual, but the second required a flight home.
“Both rounds of interviews were with the full selection panel, which is composed of former Rhodes scholars who have gone on to do really fascinating, cool things, as well as really important figures in the Australian community,” he said.
Flintoft found out that night that he’d been selected. That meant he got to celebrate with his family.
“It was a really special experience, but very brief, because I had to hop back on a plane to get back to Boston literally a couple of hours later,” he said. “It was a very rushed experience amid the delirium of jet lag. A lot of time zone changes, but it was a really wonderful experience.”
While most Rhodes Scholars receive their first graduate degrees at Oxford, Flintoft will arrive with one already completed. Applied math has provided him with tools to both deepen his undergraduate education and better deploy it at the intersection of philosophy and technology.
“For those that want to push that little extra mile and really sink our teeth into truly difficult grad classes, which are often literally at the forefront of research, that kind of experience is very special,” he said. “Applied math specifically is a wonderfully versatile degree. It has intersections in biology, economics, physics, computer science, a whole host of different disciplines.”
At Oxford, Flintoft is especially interested in the philosophical and societal implications of artificial intelligence and fundamental questions such as what it means to be human when AI can improve on or replace so much of human behavior.
“There are going to be a whole host of really good benefits as a result of AI being deployed en masse in a society,” Flintoft said. “Productivity increases, and there’ll be benefits to people’s lives and livelihoods because there’ll be a whole bunch of quite complex services that will be a lot cheaper. But the important thing is that the transformation is done right, because AI will catalyze and accelerate. It’s important that that acceleration happens in the right direction.”
Flintoft’s time at Harvard reflects broad interdisciplinary interests. His extracurricular activities include being managing editor of the Harvard Review of Philosophy and editorial chair of the Harvard Undergraduate Law Review.
He is also a researcher in the Soft Math Lab of L. Mahadevan, Lola England de Valpine Professor of Applied Mathematics, of Organismic and Evolutionary Biology, and of Physics at SEAS, where he studies complex behaviors in biology, such as the mathematics behind control of muscular hydrostats such as octopus arms.
Flintoft’s research raises philosophical questions about consciousness and conscious decision-making — questions Flintoft might very well get to answer over the next two years.
“I’ll get the chance to hone my interests a little bit more in these two areas that are really important to me,” he said. “Additionally, the Rhodes Scholarship is designed to immerse scholars in a community of other scholars who are also all public service-oriented and really want to maximize the impact that their study can have on the world. I think that connection between what we learn in the classroom and the way that we then go on to bring good into the world is really important.”
Science & Tech
What will AI mean for humanity?
E. Glen Weyl (second from right), shares his more optimistic view of technology during the panel discussion “How Is Digital Technology Shaping the Human Soul?” Panelists included Moira Weigel (from right), Nataliya Kos’myna, Brandon Vaidyanathan, and moderator Ian Marcus Corbin.
Photos by Veasey Conway/Harvard Staff Photographer
Clea Simon
Harvard Correspondent
6 min read
Scholars from range of disciplines see red flags, possibilities ahead
What does the rise of artificial intelligence mean for humanity? That was the question at the core of “How Is Digital Technology Shaping the Human Soul?” a panel discussion that drew experts from computer science to comparative literature last week.
The Oct. 1 event was the first from the Public Culture Project, a new initiative based in the office of the dean of arts and humanities. Program Director Ian Marcus Corbin, a philosopher on the neurology faculty of Harvard Medical School, said the project’s goal was putting “humanist and humanist thinking at the center of the big conversations of our age.”
“Are we becoming tech people?” Corbin asked. The answers were varied.
“We as humanity are excellent at creating different tools that support our lives,” said Nataliya Kos’myna, a research scientist with the MIT Media Lab. These tools are good at making “our lives longer, but not always making our lives the happiest, the most fulfilling,” she continued, listing examples from the typewriter to the internet.
Generative AI, specifically ChatGPT, is the latest example of a tool that essentially backfires in promoting human happiness, she suggested.
She shared details of a study of 54 students from across Greater Boston whose brain activity was monitored by electroencephalography as they wrote an essay.
Nataliya Kos’myna (right) with panelist Brandon Vaidyanathan.
One group of students was allowed to use ChatGPT, another permitted access to the internet and Google, while a third group was restricted to their own intelligence and imagination. The topics — such as “Is there true happiness?” — did not require any previous or specialized knowledge.
The results were striking: The ChatGPT group demonstrated “much less brain activity.” In addition, their essays were very similar, focusing primarily on career choices as the determinants of happiness.
The internet group tended to write about giving, while the third group focused more on the question of true happiness.
Questions illuminated the gap. All the participants were asked whether they could quote a line from their own essays, one minute after turning them in.
“Eighty-three percent of the ChatGPT group couldn’t quote anything,” compared to 11 percent from the second and third groups. ChatGPT users “didn’t feel much ownership” of their work. They “didn’t remember, didn’t feel it was theirs.”
“Your brain needs struggle,” Kos’myna said. “It doesn’t bloom” when a task is too easy. In order to learn and engage, a task “needs to be just hard enough for you to work for this knowledge.”
E. Glen Weyl, research lead with Microsoft Research Special Projects, had a more optimistic view of technology. “Just seeing the problems disempowers us,” he said, urging instead for scientists to “redesign systems.”
He noted that much of the current focus on technology is on its commercial aspect. “Well, the only way they can make money is by selling advertising,” he said, paraphrasing prevailing wisdom before countering it. “I’m not sure that’s the only way this can be structured.”
Citing works such as Steven Pinker’s new book, “When Everyone Knows That Everyone Knows,” Weyl talked about the idea of community — and how social media is more focused on groups than on individuals.
“If we thought about engineering a feed about these notions, you might be made aware of things in your feed that come from different members of your community. You would have a sense that everyone is hearing that at the same time.”
This would lead to a “theory of mind” of those other people, he explained, opening our sense of shared experiences, like that shared by attendees at a concert.
To illustrate how that could work for social media, he brought up Super Bowl ads. These, said Weyl, “are all about creating meaning.” Rather than sell individual drinks or computers, for example, we are told “Coke is for sharing. Apple is for rebels.”
“Creating a common understanding of something leads us to expect others to share the understanding of that thing,” he said.
To reconfigure tech in this direction, he acknowledged, “requires taking our values seriously enough to let them shape” social media. It is, however, a promising option.
Moira Weigel, an assistant professor in comparative literature at Harvard, took the conversation back before going forward, pointing out that many of the questions discussed have captivated humans since the 19th century.
Weigel, who is also a faculty associate at the Berkman Klein Center for Internet and Society, centered her comments around five questions, which are also at the core of her introductory class, “Literature and/as AI: Humanity, Technology, and Creativity.”
“What is the purpose of work?” she asked, amending her query to add whether a “good” society should try to automate all work. “What does it mean to have, or find, your voice? Do our technologies extend our agency — or do they escape our control and control us? Can we have relationships with things that we or other human beings have created? What does it mean to say that some activity is merely technical, a craft or a skill, and when is it poesis” or art?
Looking at the influence of large language models in education, she said, “I think and hope LLMs are creating an interesting occasion to rethink what is instrumental. They scramble our perception of what in education is essential.” LLMs “allow us to ask how different we are from machines — and to claim the space to ask those questions.”
Brandon Vaidyanathan, a professor of sociology at Catholic University of America, also saw possibility.
Vaidyanathan, the panel’s first speaker, began by noting the difference between science and technology, citing the philosopher Martin Heidegger’s concept of “enframing,” which has tech viewing everything as “product.”
Vaidyanathan noted that his experience suggests scientists take a different view.
“Underlying what we might call scientific intelligence there is a deeper, spiritual intelligence — why things matter,” he said.
Instead of the “domination, extraction, and fragmentation” most see driving tech (and especially AI), he noted that scientists tend toward “the three principles of spiritual intelligence: reverence, receptivity, and reconnection.” More than 80 percent of them “encounter a deep sense of respect for what they’re studying,” he said.
Describing a researcher studying the injection needle of the salmonella bacteria with a “deep sense of reverence,” he noted, “You’d have thought this was the stupa of a Hindu temple.
“Tech and science can open us up to these kind of spiritual experiences,” Vaidyanathan continued.
“Can we imagine the development of technology that could cultivate a sense of reverence rather than domination?” To do that, he concluded, might require a “disconnect on a regular basis.”
After previews for students, members, faculty and staff, the museum will open its doors to all — with free admission, as always — starting with a 24-hour public opening from 5 p.m. Friday, Oct. 31, to 5 p.m. Saturday, Nov. 1.
Harvard University. Photo by Dylan Goodman
Campus & Community
Tai Tsun Wu, 90
Memorial Minute — Faculty of Arts and Sciences
October 9, 2025
4 min read
At a meeting of the Faculty of Arts and Sciences on Oct. 7, 2025, the following tribute to the life and service of the late Tai Tsun Wu was spread upon the permanent records of the Faculty.
Professor Tai Tsun Wu was a formidable member of both the School of Engineering and Applied Sciences (SEAS) and the Department of Physics at Harvard. By the age of 22, his ground-breaking research on antenna theory, conducted under the direction of Professor Ronold W. P. King, had established him as one of the leading experts in this important field. The remarkable breadth of Wu’s research interests over the course of his career was underpinned by his exceptional mathematical abilities. Although he shifted the main thrust of his research to fundamental problems in physics, he continued for years to be active in solving basic electricity and magnetism problems that arise in antenna theory.
During Wu’s subsequent productive career, he pursued a long collaboration with Hung Cheng, a professor at the Massachusetts Institute of Technology (MIT). Their extraordinary study of the high-energy behavior in quantum field theory illuminated properties of renormalization theory and resulted in the prediction of rising total cross-section of hadron scattering. Wu’s work on statistical mechanics models with Barry McCoy, Craig Tracy, and others led to different insights, including finding a closed-form solution for correlation functions of the scaling limit of the Ising model, ostensibly an exact quantum field theory. In 1975 Wu collaborated with C.N. Yang to reformulate the theory of monopoles, leading to what is now referred to as the “Wu-Yang dictionary.” Professor Wu advised many graduate students and had numerous collaborators, including John Myers.
Wu’s research led to his recognition in many ways. Among other achievements, he received the Dannie Heineman Prize for Mathematical Physics and the Alexander von Humboldt Foundation Prize. He was elected to the American Academy of Arts and Sciences and to the Academia Sinica. Wu taught courses in applied mathematics and in physics in the Department of Physics and in SEAS. He was known to students and colleagues as an accessible expert on mathematical methods. Wu shunned the limelight and the pursuit of recognition; rather, he constantly focused on his research. For this reason, despite the fact that he authored over 400 publications, including six books, his work is not as widely known as it should be.
Wu was born on Dec. 1, 1933, in Shanghai, China. He came to the United States to study as an undergraduate at the University of Minnesota, where, in 1953, he won the William Lowell Putnam Mathematical Competition. This national competition for undergraduates has a Harvard connection: it was established by Elizabeth Lowell Putnam in honor of her husband and it offers a Harvard graduate school scholarship to one of the top winners each year. Wu applied to Harvard as the first Putnam Fellow from the University of Minnesota. A story that is still told is that members of the Department of Mathematics had naturally assumed that young Wu would be joining them, but he had applied to study applied physics. His doctoral thesis led to his election to the Society of Fellows at the age of 22 and to his appointment in the Harvard Faculty of Arts and Sciences at age 25, where he remained until 2021, when he became an emeritus professor.
Wu was fond of several local Chinese restaurants, where many friends were his guests for lunch or dinner. One of these guests recalled being feasted by Wu with a dinner that included chicken feet, an unusual experience. Wu expressed his generosity in many other ways. One new faculty member arrived at Harvard without a car just as Wu was about to leave for a sabbatical. Wu insisted that his young colleague drive his Dodge Dart until he returned.
While at Harvard, Wu became acquainted with Sau Lan Yu, a graduate student in experimental physics. They married on June 18, 1967, in the Harvard Memorial Church. Sau Lan went on to become distinguished for her role in the discovery of the J/𝜓 particle with Samuel Ting at Brookhaven National Laboratory, as well as for leading many experiments with the European Organization for Nuclear Research (CERN). The Wu family spent much time both in Cambridge and Europe. In 2022 they sold their Cambridge house and moved to Palo Alto, California, where Wu died in the Stanford University hospital on July 19, 2024.
Respectfully submitted,
Hung Cheng (MIT)
Sheldon Glashow
John Hutchinson
Arthur Jaffe, Chair
Portions of this Minute were previously published: Arthur Jaffe, “Tai Tsun Wu (1933-2024),” Department of Physics’ website, July 23, 2024, https://www.physics.harvard.edu/news/tai-tsun-wu-1933-2024 [accessed Aug. 11, 2025].
A Harvard gate alongside Quincy Street. Stephanie Mitchell/Harvard Staff Photographer
Campus & Community
Richard Goody, 102
Memorial Minute — Faculty of Arts and Sciences
October 9, 2025
5 min read
At a meeting of the Faculty of Arts and Sciences on Oct. 7, 2025, the following tribute to the life and service of the late Richard Goody was spread upon the permanent records of the Faculty.
With a remarkable life spanning more than a century, 1921 to 2023, and a scientific career embracing seven decades, Richard Goody successfully bridged experimental observations with theory that fostered unprecedented advances in our understanding of the Earth’s troposphere-stratosphere coupling, of the structure and function of the atmospheres of Venus and Mars, and of the intricacies of the quantum mechanics of molecular spectra. His high-resolution spectral analysis of molecules and nonequilibrium thermodynamics brought remarkable insight to what proved to be the context for climate change. Moreover, Goody possessed an innate sense for leading the development of strategic approaches at Harvard, which advanced the University’s intellectual structure and led to the modern union represented first by the Center for Earth and Planetary Physics (CEPP), the predecessor of the current Department of Earth and Planetary Sciences and the area of Environmental Science and Engineering.
Remarkably, a number of Goody’s intellectual dimensions were present in his first experimental endeavor immediately following the Second World War. As a graduate student at the University of Cambridge, Goody designed and built an infrared spectrometer to obtain measurements of water vapor in the Earth’s stratosphere. The spectrometer operated from a wooden bomber, the Mosquito, capable of altitudes approaching 40,000 ft. and powered by two 3,000 hp engines. Despite the extreme levels of noise and vibration, the aerodynamic instability of the aircraft, and the need to acquire a solar image on the center of the spectrometer’s entrance slit, his successful infrared spectrum of the Sun yielded a determination of the water vapor concentration in the stratosphere. This profound accomplishment set a benchmark for the unprecedented observations and theoretical foundations for the quantitative interpretation of the interactions of photons with molecular structures, which defined his scientific career.
Goody moved from the U.K. to Harvard in 1958, when studies of the Earth and of Space systems were rapidly expanding as the U.S. and the Soviet Union increasingly engaged in the Cold War. The modern era of leadership in Earth and Planetary Physics was born when he founded the Harvard CEPP, providing unprecedented support for these studies and initiating strategic approaches that resulted in increasingly sophisticated observations and modeling of the planets and of the Earth’s atmospheric, oceanic, and biological systems. Goody brought Michael McElroy to Harvard to join the CEPP in 1970, advancing both planetary studies and aeronomy (the study of the Earth’s atmosphere and of its union with the solar system and interstellar processes). McElroy, it would turn out, profoundly broadened and deepened the intellectual research structure at Harvard, as well as the architecture of the educational design that has carried forward to the present. McElroy, Steven Wofsy, and Yuk Yung were central to the introduction of halogen species into studies of catalytic loss of stratospheric ozone, but the intellectual union of Goody, McElroy, Yung, and Wofsy extended across multiple domains of atmospheric radiation, the photochemical structures of planetary atmospheres, and perturbations to the Earth’s atmosphere by human effluents.
In parallel with Goody’s vision of embracing the rapidly expanding manifold of intellectual pursuits was a consistent focus on advancing the fundamental understanding of atmospheric radiation, by virtue of multiple publications and the release of two classic textbooks: (1) “Principles of Atmospheric Physics and Chemistry” and (2) “Atmospheric Radiation: Theoretical Basis,” co-authored with Yung, who became a Professor of Planetary Sciences at the California Institute of Technology. These textbooks are central to undergraduate and graduate curricula of universities internationally. When James Anderson was drawn to Harvard by Goody, McElroy, and Dudley Herschbach (in the [then] Department of Chemistry), the CEPP began to experiment with determining the concentrations of the major free radicals involved in stratospheric ozone loss, engaging a new class of in situ observations from stratospheric balloon and aircraft platforms.
The innovation and intellectual agility that the CEPP brought to Earth system studies broadened with time under the leadership of McElroy, leading to the initiation of the new Department of Earth and Planetary Sciences from the Department of Geology, as well as the Harvard University Center for the Environment and the multidisciplinary program Environmental Sciences and Public Policy. Thanks to the vision of scientists like Goody and McElroy, who recognized that a multitude of intellectual disciplines was required to address the field of Earth and planetary sciences, there are now over 30 faculty teaching and researching in this area at Harvard.
Remarkably, Goody’s “retirement” in 1991, at age 70, marked the beginning of a major new phase in his scientific career. After becoming Professor Emeritus at Harvard, he continued his work, which started in 1977, as the Distinguished Visiting Scientist at the Jet Propulsion Laboratory (JPL) in Pasadena, California — a highly productive partnership that lasted for over three decades. Goody maintained very active involvement with scientific developments at Harvard. He released his classic textbook “Principles of Atmospheric Physics and Chemistry” in 1995, which, with its emphasis on irreversibility, entropy, and the Carnot cycle, served to establish the critical role of thermodynamics to a new generation as the serious consequences of climate change were rapidly intensifying.
In yet another dimension during this “retirement” period, Goody led a collaboration with Anderson, Gerald North (Texas A&M University), and Kuo-Nan Liou (University of California, Los Angeles), envisioning a new climate observing system by engaging the absolute calibration of a high-resolution infrared spectrometer that could establish subtle (and not-so-subtle) changes in the radiation emitted from the Earth to Space with an accuracy of 50 mK from orbit. The strategy also engaged GPS radio occultation. This effort led to the creation of the Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission.
Respectfully submitted,
Michael McElroy
Steven Wofsy
Yuk L. Yung (California Institute of Technology)
James Anderson, Chair
A total of 48 projects from across the UK are receiving funding from a new £9 million proof of concept programme to support and accelerate the development of new or improved technologies, products, processes and services. The aim of the UK Research and Innovation (UKRI) fund is to use research to drive growth and create the jobs of tomorrow.
The four Cambridge projects receiving funding exemplify the University's commitment to translating world-class research into practical solutions that address global challenges in health, sustainability, and inclusion.
CamBoom: championing inclusion in cricket with engineered bamboo bats
Pioneered by Dr Darshil Shah, Associate Professor in Materials Science and Design in the Department of Architecture, this innovation aims to achieve an inclusive and sustainable future for cricket by developing low-cost bamboo bats, meeting the needs of millions of players in low and middle-income countries.
AI-based coronary artery analysis
Professor Martin Bennett, British Heart Foundation Chair of Cardiovascular Sciences in the Department of Medicine, is using AI to advance medical diagnostics, improving the accuracy and efficiency of coronary artery analysis.
Pre-clinical development of orally-administered, ultra-stable antibody mimetics
This initiative, led by Professor Mark Howarth and Dr Ana Rossi at the Department of Pharmacology, focuses on new treatments for gastrointestinal conditions, using innovative antibody mimetics that can be administered orally.
Sustainable film packaging from plant waste
Professors James Elliott, Ruth Cameron and Serena Best from the Department of Materials Science and Metallurgy have developed a new way of creating sustainable cellulose-based films at scale from waste plant material, with a range of applications from food and personal care packaging to anti-static discharge bags.
Professor John Aston, Pro-Vice-Chancellor for Research at the University of Cambridge, said: “Turning Cambridge research into innovations that will change people’s lives is at the heart of our mission. That four Cambridge projects have received UKRI proof of concept funding is a tribute both to the excellence of our researchers and to the support provided by our innovation arm, Cambridge Enterprise, in helping to translate their new ideas into effective solutions to global challenges.”
Dr Jim Glasheen, Chief Executive of Cambridge Enterprise, added: “The strength of Cambridge research lies not only in its scientific excellence but in our ability to translate discoveries into real-world impact. These projects are a great example of this strength, and showcase the University’s leadership in research translation and innovation. Funding of this kind is vital for nurturing breakthrough ideas and delivering lasting impact.”
This funding provides critical early-stage support to projects, helping researchers and innovators bridge the gap before attracting private investment, reducing the risks associated with premature market entry.
Speaking about the 48 projects receiving funding, Professor Charlotte Deane, UK Research and Innovation’s (UKRI) Research Commercialisation Executive Champion, said: "These projects are a powerful demonstration of the UK’s talent for turning cutting-edge research into real-world solutions. UKRI’s new proof of concept programme is all about helping researchers take that critical next step toward commercialisation, ensuring that bold ideas are not just published but put into practice where they can deliver tangible impact."
Following last year’s rejection of the expansion programme for Switzerland’s national highways, and the financial difficulties in the 2035 rail expansion service concept, Swiss transport policy is at a crossroads. On behalf of the Federal Department of the Environment, Transport, Energy and Communications (DETEC), ETH Zurich has prioritised around 500 planned projects for road, rail and urban transport. Ulrich Weidmann, Professor of Transport Systems, explains the key insights.
Finance can bridge the gap between climate science and business decision-making – and communication, innovation and education are critical, according to a panel of experts convened by the Cornell SC Johnson College of Business and the Cornell Atkinson Center for Sustainability during Climate Week 2025.
Independent analysis by the Observatory for Mathematical Education (OME) found that the specialist sixth forms are not only boosting attainment and progression, but also significantly widening participation in STEM.
Cambridge Maths School was opened in September 2023 by the Eastern Learning Alliance (ELA) – a multi-academy trust with schools across Cambridgeshire and East Anglia – in collaboration with the University of Cambridge. In August this year, it celebrated its first students’ A-level results, with more than half of the grades (53%) awarded at A*.
According to the new OME report – looking at the impact of maths schools across the country, 10 years after the first centres opened – female students, those from under-represented ethnic groups, and those from low socio-economic backgrounds all progress at higher rates to mathematically intensive STEM degrees than comparable peers elsewhere. Maths school students are also more likely to achieve the highest grades in A-level mathematics and further mathematics, and progress to the UK’s most selective STEM universities, including Oxbridge, at significantly higher rates than their matched peers.
The first maths schools launched in 2014 with the principal aim of helping prepare more of the country’s most mathematically able students to succeed in maths disciplines at top universities, and address the UK’s skills shortage in STEM subjects. There are now 11 maths schools in the University Maths School Network. Nine are open, with two more planned – in the North East (Durham University) and East Midlands (University of Nottingham) – both currently awaiting government approval. If confirmed, every region of England will have at least one maths school.
Clare Hargraves, Headteacher at Cambridge Maths School, said: "At Cambridge Maths School, we see every day how transformative a deep mathematical education can be. This report confirms what we witness in our classrooms: that with the right support, young people from all backgrounds can thrive, excel, and shape the future through mathematics."
Rajen Shah, Professor of Statistics at the University of Cambridge, and a governor at Cambridge Maths School, said: "A mathematical education can really flourish when curiosity and collaboration are at the heart of learning. The Cambridge Maths School offers exactly that environment, and the exceptional outcomes achieved by its students show what is possible when talent is nurtured in this way. The University of Cambridge is delighted to continue supporting the school in its mission to help young people from all backgrounds develop a lasting passion and confidence in mathematics."
Lucy Scott, CEO of the Eastern Learning Alliance, said: "We are delighted to see such strong evidence that University Maths Schools are delivering on their shared promise: opening up access to mathematics at the highest level for all young people, regardless of their background. It’s particularly encouraging to see the impact for groups traditionally under-represented in the subject. This is what the Cambridge Maths School was created to do, and I’d like to extend my heartfelt thanks to all our staff who work tirelessly every day to ensure that vision becomes a reality."
Dan Abramson, CEO of the University Maths Schools Network, said: "University Maths Schools give students with a spark for maths the chance to thrive, whatever their background. Ten years on from their establishment, this study proves that the schools are fulfilling their mission to be engines of social mobility and nurture a new generation of mathematical scientists."
By Asst Prof Reuben Ng from the Lee Kuan Yew School of Public Policy at NUS and Lead Scientist at the Lloyd’s Register Foundation Institute for the Public Understanding of Risk at NUS
The Straits Times, 8 October 2025, Opinion, pB2
CNA, 7 October 2025; 8world Online, 7 October 2025; Suria News Online, 7 October 2025; CNA938, 7 October 2025; CNA Online, 8 October 2025; The Straits Times, 8 October 2025, Singapore, pA14; The Business Times, 8 October 2025, p28; Lianhe Zaobao, 8 October 2025, Focus, p3; Berita Harian, 8 October 2025, p2; Tamil Murasu, 8 October 2025, p3
The potential for future breakthroughs benefitting society was the cornerstone of the Distinguished Undergraduate Research Prize (DURP) and Outstanding Undergraduate Researcher Prize (OURP) which were awarded to 47 NUS undergraduate research projects this year. A total of 91 entries were submitted and these projects were rigorously evaluated based on stringent criteria, such as impact factor, understanding of the subject, evidence of critical and independent thinking, originality and significance, and accolades received. Two outstanding projects were awarded the DURP and 45 projects received the OURP.
Established almost 20 years ago, the OURP recognises students who demonstrate outstanding participation and achievement in research. These exemplary research projects were undertaken through a range of initiatives, including the Undergraduate Research Opportunities Programme (UROP) and Field Service Projects. The DURP was launched in Academic Year 2023/24 to recognise top-ranked research projects in OURP.
NUS News highlights three student projects from the NUS School of Computing (NUS Computing) and the NUS Yong Loo Lin School of Medicine (NUS Medicine) that tackle real-world issues, such as hepatocellular carcinoma, cardiovascular risk factors, and the testing of concurrent software.
Advancing liver cancer diagnosis and treatment
Lin Hong Yi, who had recently graduated from NUS Medicine, won the DURP (Individual Category) for his project on primary liver cancer, or hepatocellular carcinoma (HCC). This type of cancer is challenging to treat because of its complex composition from varied gene expression. Conducted as part of the larger Precision Medicine In Liver Cancer Across An Asia-Pacific Network 2.0 (PLANet) programme, Hong Yi’s study analysed human liver cancer tissues in great detail, focusing on both gene activity and a specific type of “epigenetic switch” that helps control whether genes are turned on or off.
He discovered several cancer-promoting genes that are switched on through these epigenetic mechanisms, with liver cancers caused by different factors — such as Hepatitis B virus infection or fatty liver disease — showing distinct patterns of gene activation. The study also identified certain regions of the genome that are usually inactive in normal cells but appear to play a key role in activating cancer genes in HCC. These findings could potentially pave the way for future studies to develop better tests, earlier detection, and more personalised treatments for HCC.
In the study, Hong Yi extracted DNA from cancer cells and used a combination of chemical treatments and ultrasonic waves to isolate specific regions of interest. The samples were then decoded using advanced sequencing technology, with the genetic information analysed with powerful computer algorithms to identify gene regions that are commonly activated by these epigenetic switches.
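The article gives only a high-level description of the computational step, so the following is a minimal, hypothetical sketch of what "identifying gene regions that are commonly activated" could look like in code: given per-sample sets of epigenetically activated regions, keep the regions that recur in at least a chosen fraction of tumours. The region labels, threshold, and data are invented for illustration; this is not the PLANet analysis pipeline.

```python
from collections import Counter

# Hypothetical per-tumour sets of epigenetically activated gene regions
# (illustrative labels only, not data from the PLANet study).
activated_regions_by_sample = {
    "tumour_01": {"regionA", "regionB", "regionC"},
    "tumour_02": {"regionA", "regionC", "regionD"},
    "tumour_03": {"regionA", "regionC"},
    "tumour_04": {"regionB", "regionC", "regionE"},
}

def commonly_activated(samples, min_fraction=0.75):
    """Return regions activated in at least `min_fraction` of samples."""
    counts = Counter(region for regions in samples.values() for region in regions)
    threshold = min_fraction * len(samples)
    return {region for region, n in counts.items() if n >= threshold}

if __name__ == "__main__":
    # regionC appears in 4/4 samples and regionA in 3/4, so both pass the 75% cut.
    print(sorted(commonly_activated(activated_regions_by_sample)))
```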
Enhancing cardiovascular health
NUS Medicine students Yiming Chen, Srinithy Nagarajan, Jayanth Jayabaskaran and NUS Medicine alumna Rachel Goh received the DURP (Group Category) for their project “The Global Syndemic of Modifiable Cardiovascular Risk Factors projected from 2025 to 2050”.
Cardiovascular diseases (CVDs) are the leading cause of global mortality and understanding the trends driving CVDs is essential in designing effective countermeasures. This group project forecasts trends in five key modifiable cardiovascular risk factors — high systolic blood pressure, low-density lipoprotein cholesterol, body mass index, fasting glucose levels, and tobacco use — over the next 25 years till 2050. Modifiable risk factors refer to factors that can be controlled by individuals, in contrast to non-modifiable factors such as genetic risks.
The group found that despite improved management of CVDs leading to a projected decline in rates of disability-adjusted life years (a measure of the overall burden of disease in a population) across all cardiovascular risk factors, the overall disability-adjusted life years will continue to rise due to population growth and ageing. High systolic blood pressure and BMI are the fastest-growing contributors to these trends, emphasising the urgent need for tailored, region- and demographic-specific cardiovascular prevention and intervention to curb global cardiometabolic risk.
The historical and projection data for the study were taken from the Institute for Health Metrics and Evaluation’s Global Burden of Disease database, with the largest contributors measured by the deaths and disability-adjusted life years (i.e. years of life lost to disability and death from disease) attributable to each risk factor. Further studies are also being conducted to examine the impacts of these risk factor trends on future cardiovascular health.
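The article does not describe the group’s forecasting model, so the sketch below is only a rough illustration of the general workflow implied here: fit a trend to historical burden estimates for a risk factor, extrapolate it forward, then combine the projected rate with population growth to see how total burden can rise even as rates fall. All numbers and the simple linear fit are hypothetical placeholders, not figures or methods from the study or the Global Burden of Disease database.

```python
import numpy as np

# Hypothetical DALY rates (per 100,000 people) for one risk factor;
# illustrative values only, not figures from the GBD database.
years = np.array([1990, 1995, 2000, 2005, 2010, 2015, 2020])
daly_rate = np.array([2100.0, 2050.0, 1990.0, 1940.0, 1900.0, 1870.0, 1850.0])

# Fit a simple linear trend to the historical rates and extrapolate to 2050.
slope, intercept = np.polyfit(years, daly_rate, deg=1)
future_years = np.arange(2025, 2051, 5)
projected_rate = slope * future_years + intercept

# A declining rate can still mean a rising total burden once population
# growth (and ageing) is factored in -- the pattern the group reported.
projected_population = np.linspace(8.2e9, 9.7e9, len(future_years))
projected_total_dalys = projected_rate / 1e5 * projected_population

for year, rate, total in zip(future_years, projected_rate, projected_total_dalys):
    print(f"{year}: rate ~ {rate:.0f} per 100,000, total ~ {total / 1e6:.1f} million DALYs")
```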
Boosting software reliability
Jed Koh, a Computer Science major at NUS Computing, was awarded the OURP in the Individual Category for his project “Property Testing for Trace Regular Languages”. He conducted the research project as part of UROP last year when he was a third-year student. Working with Assistant Professor Umang Mathur, Presidential Young Professor at NUS Computing, on concurrency testing and trace languages, Jed combined both theoretical depth and practical applications for his project.
To meet the rising demands of modern software such as cloud services and web browsers, concurrency is increasingly leveraged, enabling different components of a software application to run simultaneously for a shared task. However, this exponentially increases the number of ways in which individual components interact, concealing software bugs which are notoriously hard to find.
Jed’s project, one of the top-ranked OURP projects, aims to enhance the reliability of modern software and detect concurrency bugs early in the software development cycle. Existing techniques that attempt to find these bugs tend to be extremely resource-intensive, requiring a lot of time, computing and monetary resources. Hence, he proposed an algorithmic framework to build fast, scalable, randomised algorithms to detect a large class of concurrency bugs that routinely impact software correctness and usability. As part of the formal analysis of these algorithms, he proved mathematical theorems that show that the error rates of these algorithms can be made arbitrarily small.
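The article does not spell out how these algorithms work, so the following is only a loose illustration of the general idea behind randomised concurrency testing: run a small concurrent program many times under varied thread interleavings, check an invariant after each run, and rely on repetition to drive the chance of missing a real bug arbitrarily low. The class names and parameters are invented for illustration and are not the property-testing framework described in the project.

```python
import random
import threading
import time

class UnsafeCounter:
    """A counter with a deliberate read-modify-write race (no lock)."""

    def __init__(self):
        self.value = 0

    def increment(self):
        current = self.value
        # A tiny random pause widens the window in which another thread
        # can interleave between the read above and the write below.
        time.sleep(random.random() * 1e-4)
        self.value = current + 1

def run_trial(num_threads=4, increments=25):
    """Run one randomised schedule; return True if the invariant held."""
    counter = UnsafeCounter()
    threads = [
        threading.Thread(
            target=lambda: [counter.increment() for _ in range(increments)]
        )
        for _ in range(num_threads)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter.value == num_threads * increments

if __name__ == "__main__":
    trials = 20
    failures = sum(1 for _ in range(trials) if not run_trial())
    # Independent randomised runs make the probability of missing a
    # real concurrency bug shrink as the number of trials grows.
    print(f"Invariant violated in {failures}/{trials} randomised runs")
```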
Experience the Excitement of Research
Could you be the next to make a similarly groundbreaking discovery? Extolling the additional benefits of research, Mr John Caines, Assistant Senior Manager, Research Experience (REx) said, “Undertaking research activities is incredibly meaningful and valuable — not just because of the potential outsized impact on solving real-life problems, but also educational and employability advantages when entering the workforce.”
To explore a multiverse of possibilities in research with UROP and REx today, visit NUS Undergraduate Research to learn more.
Ranked eleventh, ETH Zurich is once again placed among the world’s best universities in this year’s Times Higher Education (THE) World University Rankings. This means it remains the highest-ranked university outside the Anglo-Saxon sphere.
NUS students seeking career and internship opportunities will now benefit from a refreshed format for the NUS Career Fest, which will be held twice annually starting this academic year.
The revamped flagship event organised by the NUS Centre for Future-ready Graduates (CFG) aims to enhance support for students in their career journeys by providing them with more opportunities to engage with employers, boost their professional development, and prepare for the workforce.
“We are encouraged that employer demand remains strong despite the job market uncertainties and disruptions AI brings to entry-level roles. More than 230 organisations are participating in NUS Career Fest Oct 2025. By running NUS Career Fest twice yearly, alongside faculty-specific fairs and combined with a vibrant, year-round campus recruitment calendar, we are creating more touchpoints for students to connect with employers. Ahead of the fair, CFG conducted workshops with a focus on strengthening the life skills as well as AI readiness including how to use AI responsibly and effectively in job search and early career roles. These programmes are designed to help our students stand out, learn boldly and adapt confidently into a fast-moving work world”, said Joan Tay, Senior Director, NUS Centre for Future-ready Graduates.
A theme for the times: From Gen Z to Gen AI
Held from 8 to 9 October 2025 at two venues - the Stephen Riady Centre in NUS University Town and the Engineering Auditorium Atrium at the College of Design and Engineering - this year’s fair carried the timely theme “From Gen Z to Gen AI: The Prompt to Your Next Great Career”.
The theme underscores how today’s graduates are entering a workforce where generational strengths such as creativity, adaptability and digital fluency intersect with the transformative potential of artificial intelligence (AI).
Over the two days, students have the opportunity to meet with more than 230 participating employers across 20 industries, ranging from established sectors such as finance, engineering, and the public service, to fast-growing domains like AI, sustainability, and digital commerce.
On day one, thousands of students streamed through the booths, exploring internships, full-time roles, and career pathways they might not have considered before. Christopher Zhang, a Year 1 student from Faculty of Arts and Social Sciences reflected: “I think it’s a great idea to hold the fair in October because it aligns well with the cycle of hiring interns. Many companies start preparing for internship recruitment during the semester break in January, when listings are released. Attending the fair early gives me the chance to plan ahead, start shortlisting opportunities, preparing my résumé, and thinking about the types of internships I want to pursue. When February comes around, it’s more about refining the details and identifying specific companies I’d like to join. The experience also made me realise how important it is to stand out, to secure a role in my desired industry. It’s not just about being prepared but about understanding what more I can do to differentiate myself from others.”
Employers, in turn, were eager to connect with NUS talent. Pearlyn Wong, Talent Acquisition Senior Manager at SimplifyNext, said, “Our experience with NUS students has been genuinely energising. They bring a strong foundation in analytical thinking and an impressive ability to connect ideas across disciplines – from technology and data to human behaviour and design. What really stood out was how they responded to experiential challenges. In settings like our Agentic AI Hackathon or internships, NUS students quickly move from theory to practice – asking thoughtful questions and showing the kind of intellectual curiosity that thrives in an environment like SimplifyNext. We look for people who are not just good at solving problems, but who reframe them. And that’s something we’ve consistently seen among NUS students. Their adaptability, teamwork, and willingness to learn by doing mirror the same values that drive our own teams working on complex, AI-driven transformation projects.”
Many employers emphasised that beyond technical expertise, they are seeking graduates with critical thinking, adaptability, and learning agility.
Harnessing innovation in career preparation
A highlight of this October’s Career Fest was the launch of the CFG AI-Xplore chatbot, an AI-powered tool designed to provide students with instant access to career resources and guidance – from resume tips to sector-specific opportunities. Complementing in-person career advisory sessions, the chatbot supports NUS’ commitment to prepare students to thrive in an AI-augmented world.
Also drawing student interest was the CFG Career Quest game, created by Woon Cher Han, a Year 4 student from the School of Computing, whose entry won a CFG-organised competition earlier this year. In the game, players step into the shoes of a fresh graduate battling the Monster of Doubt and Fear, using CFG’s suite of offerings to overcome challenges. The interactive storyline resonated strongly with peers who recognised their own uncertainties mirrored in the gameplay.
Alongside these innovations, a range of fringe activities helped students sharpen their personal brand and networking skills. At the Career Pitch Stop, students practised their elevator pitch with real-time feedback from CFG career advisors. Interactive self-help kiosks, created in partnership with e2i, offered a gamified way for students to practise articulating their value proposition. Lifelong Learning SG guided students in discovering more about themselves and exploring career pathways that they might not have considered, while the NUS Lifelong Learning team advised soon-to-be alumni on courses to support professional and personal growth. To round off the experience at the Career Fest, students could also get a complimentary professional headshot.
Benedicta Edlyn Kurniawan, a Year 4 student from Faculty of Science, said: “Attending NUS Career Fest has really broadened my perspective. Many of the companies here are from industries I’ve never explored before. The fair made it so much easier to find out which companies are hiring and ask questions directly. I ended up discovering organisations with roles that genuinely interest me. I could even scan their QR codes on the spot to apply, which made the whole experience simple and productive.”
Sustaining the journey beyond Career Fest
Preparation for NUS Career Fest began weeks earlier, with workshops helping students refine resumes, strengthen elevator pitches, and polish personal brand statements. For the first time, post-event workshops have been introduced to ensure that students maintain momentum beyond the fair. These cover practical topics such as navigating job offers, salary negotiations, and building resilience during job searches.
Students can also explore international career opportunities at the Global Career Fair, between 13 and 23 October 2025, which offers access to employers from Japan, Hong Kong, India, Thailand, Mainland China, the United Arab Emirates, and beyond.
This will lead into the next edition of NUS Career Fest, slated to take place in the first quarter of 2026. By anchoring two career fairs across the academic year, CFG is reshaping career preparation into a sustained journey of growth, discovery, and opportunity.
Understanding how interactions between the central nervous system and the immune system contribute to problems of aging, including Alzheimer’s disease, Parkinson’s disease, arthritis, and more, can generate new leads for therapeutic development, speakers said at MIT’s symposium “The Neuro-Immune Axis and the Aging Brain” on Sept. 18.
“The past decade has brought rapid progress in our understanding of how adaptive and innate immune systems impact the pathogenesis of neurodegenerative disorders,” said Picower Professor Li-Huei Tsai, director of The Picower Institute for Learning and Memory and MIT’s Aging Brain Initiative (ABI), in her introduction to the event, which more than 450 people registered to attend. “Together, today’s speakers will trace how the neuro-immune axis shapes brain health and disease … Their work converges on the promise of immunology-informed therapies to slow or prevent neurodegeneration and age-related cognitive decline.”
For instance, keynote speaker Michal Schwartz of the Weizmann Institute in Israel described her decades of pioneering work to understand the neuro-immune “ecosystem.” Immune cells, she said, help the brain heal, and support many of its functions, including its “plasticity,” the ability it has to adapt to and incorporate new information. But Schwartz’s lab also found that an immune signaling cascade can arise with aging that undermines cognitive function. She has leveraged that insight to investigate and develop corrective immunotherapies that improve the brain’s immune response to Alzheimer’s both by rejuvenating the brain’s microglia immune cells and bringing in the help of peripheral immune cells called macrophages. Schwartz has brought the potential therapy to market as the chief science officer of ImmunoBrain, a company testing it in a clinical trial.
In her presentation, Tsai noted recent work from her lab and that of computer science professor and fellow ABI member Manolis Kellis showing that many of the genes associated with Alzheimer’s disease are most strongly expressed in microglia, giving it an expression profile more similar to autoimmune disorders than to many psychiatric ones (where expression of disease-associated genes typically is highest in neurons). The study showed that microglia become “exhausted” over the course of disease progression, losing their cellular identity and becoming harmfully inflammatory.
“Genetic risk, epigenomic instability, and microglia exhaustion really play a central role in Alzheimer’s disease,” Tsai said, adding that her lab is now also looking into how immune T cells, recruited by microglia, may also contribute to Alzheimer’s disease progression.
The body and the brain
The neuro-immune “axis” connects not only the nervous and immune systems, but also extends between the whole body and the brain, with numerous implications for aging. Several speakers focused on the key conduit: the vagus nerve, which runs from the brain to the body’s major organs.
For instance, Sara Prescott, an investigator in the Picower Institute and an MIT assistant professor of biology, presented evidence her lab is amassing that the brain’s communication via vagus nerve terminals in the body’s airways is crucial for managing the body’s defense of respiratory tissues. Given that we inhale about 20,000 times a day, our airways are exposed to many environmental challenges, Prescott noted, and her lab and others are finding that the nervous system interacts directly with immune pathways to mount physiological responses. But vagal reflexes decline in aging, she noted, increasing susceptibility to infection, and so her lab is now working in mouse models to study airway-to-brain neurons throughout the lifespan to better understand how they change with aging.
In his talk, Caltech Professor Sarkis Mazmanian focused on work in his lab linking the gut microbiome to Parkinson’s disease (PD), for instance by promoting alpha-synuclein protein pathology and motor problems in mouse models. His lab hypothesizes that the microbiome can nucleate alpha-synuclein in the gut via a bacterial amyloid protein that may subsequently promote pathology in the brain, potentially via the vagus nerve. Based on its studies, the lab has developed two interventions. One is giving alpha-synuclein overexpressing mice a high-fiber diet to increase short-chain fatty acids in their gut, which actually modulates the activity of microglia in the brain. The high-fiber diet helps relieve motor dysfunction, corrects microglia activity, and reduces protein pathology, he showed. Another is a drug to disrupt the bacterial amyloid in the gut. It prevents alpha synuclein formation in the mouse brain and ameliorates PD-like symptoms. These results are pending publication.
Meanwhile, Kevin Tracey, professor at Hofstra University and Northwell Health, took listeners on a journey up and down the vagus nerve to the spleen, describing how impulses in the nerve regulate immune system emissions of signaling molecules, or “cytokines.” Too great a surge can become harmful, for instance causing the autoimmune disorder rheumatoid arthritis. Tracey described how a newly U.S. Food and Drug Administration-approved pill-sized neck implant to stimulate the vagus nerve helps patients with severe forms of the disease without suppressing their immune system.
The brain’s border
Other speakers discussed opportunities for understanding neuro-immune interactions in aging and disease at the “borders” where the brain’s and body’s immune systems meet. These areas include the meninges that surround the brain, the choroid plexus (proximate to the ventricles, or open spaces, within the brain), and the interface between brain cells and the circulatory system.
For instance, taking a cue from studies showing that circadian disruptions are a risk factor for Alzheimer’s disease, Harvard Medical School Professor Beth Stevens of Boston Children’s Hospital described new research in her lab that examined how brain immune cells may function differently around the day-night cycle. The project, led by newly minted PhD Helena Barr, found that “border-associated macrophages” — long-lived immune cells residing in the brain’s borders — exhibited circadian rhythms in gene expression and function. Stevens described how these cells are tuned by the circadian clock to “eat” more during the rest phase, a process that may help remove material draining from the brain, including Alzheimer’s disease-associated peptides such as amyloid-beta. So, Stevens hypothesizes, circadian disruptions, for example due to aging or night-shift work, may contribute to disease onset by disrupting the delicate balance in immune-mediated “clean-up” of the brain and its borders.
Following Stevens at the podium, Washington University Professor Marco Colonna traced how various kinds of macrophages, including border macrophages and microglia, develop from the embryonic stage. He described the different gene-expression programs that guide their differentiation into one type or another. One gene he highlighted, for instance, is necessary for border macrophages along the brain’s vasculature to help regulate the waste-clearing cerebrospinal fluid (CSF) flow that Stevens also discussed. Knocking out the gene also impairs blood flow. Importantly, his lab has found that versions of the gene may be somewhat protective against Alzheimer’s, and that regulating expression of the gene could be a therapeutic strategy.
Colonna’s WashU colleague Jonathan Kipnis (a former student of Schwartz) also discussed macrophages that are associated with the particular border between brain tissue and the plumbing alongside the vasculature that carries CSF. The macrophages, his lab showed in 2022, actively govern the flow of CSF. He showed that removing the macrophages let Alzheimer’s proteins accumulate in mice. His lab is continuing to investigate ways in which these specific border macrophages may play roles in disease. He is also looking, in separate studies, at how the skull’s bone marrow contributes to the population of immune cells in the brain and may play a role in neurodegeneration.
For all the talk of distant organs and the brain’s borders, neurons themselves were never far from the discussion. Harvard Medical School Professor Isaac Chiu gave them their direct due in a talk focusing on how they participate in their own immune defense, for instance by directly sensing pathogens and giving off inflammation signals upon cell death. He discussed a key molecule in that latter process, which is expressed among neurons all over the brain.
Whether they were looking within the brain, at its border, or throughout the body, speakers showed that age-related nervous system diseases are not only better understood but also possibly better treated by accounting not only for the nerve cells but also for their immune system partners.
Audience members packed Singleton Auditorium (and the overflow seating) in MIT’s Building 46 for the Sept. 18 symposium, “The Neuro-Immune Axis and the Aging Brain.”
Understanding how interactions between the central nervous system and the immune system contribute to problems of aging, including Alzheimer’s disease, Parkinson’s disease, arthritis, and more, can generate new leads for therapeutic development, speakers said at MIT’s symposium “The Neuro-Immune Axis and the Aging Brain” on Sept 18.“The past decade has brought rapid progress in our understanding of how adaptive and innate immune systems impact the pathogenesis of neurodegenerative disorders,” sa
Understanding how interactions between the central nervous system and the immune system contribute to problems of aging, including Alzheimer’s disease, Parkinson’s disease, arthritis, and more, can generate new leads for therapeutic development, speakers said at MIT’s symposium “The Neuro-Immune Axis and the Aging Brain” on Sept 18.
“The past decade has brought rapid progress in our understanding of how adaptive and innate immune systems impact the pathogenesis of neurodegenerative disorders,” said Picower Professor Li-Huei Tsai, director of The Picower Institute for Learning and Memory and MIT’s Aging Brain Initiative (ABI), in her introduction to the event, which more than 450 people registered to attend. “Together, today’s speakers will trace how the neuro-immune axis shapes brain health and disease … Their work converges on the promise of immunology-informed therapies to slow or prevent neurodegeneration and age-related cognitive decline.”
For instance, keynote speaker Michal Schwartz of the Weizmann Institute in Israel described her decades of pioneering work to understand the neuro-immune “ecosystem.” Immune cells, she said, help the brain heal, and support many of its functions, including its “plasticity,” the ability it has to adapt to and incorporate new information. But Schwartz’s lab also found that an immune signaling cascade can arise with aging that undermines cognitive function. She has leveraged that insight to investigate and develop corrective immunotherapies that improve the brain’s immune response to Alzheimer’s both by rejuvenating the brain’s microglia immune cells and bringing in the help of peripheral immune cells called macrophages. Schwartz has brought the potential therapy to market as the chief science officer of ImmunoBrain, a company testing it in a clinical trial.
In her presentation, Tsai noted recent work from her lab and that of computer science professor and fellow ABI member Manolis Kellis showing that many of the genes associated with Alzheimer’s disease are most strongly expressed in microglia, giving the disease an expression profile more similar to autoimmune disorders than to many psychiatric ones (where expression of disease-associated genes typically is highest in neurons). The study showed that microglia become “exhausted” over the course of disease progression, losing their cellular identity and becoming harmfully inflammatory.
“Genetic risk, epigenomic instability, and microglia exhaustion really play a central role in Alzheimer’s disease,” Tsai said, adding that her lab is now also looking into how immune T cells, recruited by microglia, may also contribute to Alzheimer’s disease progression.
The body and the brain
The neuro-immune “axis” connects not only the nervous and immune systems, but also extends between the whole body and the brain, with numerous implications for aging. Several speakers focused on the key conduit: the vagus nerve, which runs from the brain to the body’s major organs.
For instance, Sara Prescott, an investigator in the Picower Institute and an MIT assistant professor of biology, presented evidence her lab is amassing that the brain’s communication via vagus nerve terminals in the body’s airways is crucial for managing the body’s defense of respiratory tissues. Given that we inhale about 20,000 times a day, our airways are exposed to many environmental challenges, Prescott noted, and her lab and others are finding that the nervous system interacts directly with immune pathways to mount physiological responses. But vagal reflexes decline in aging, she noted, increasing susceptibility to infection, and so her lab is now working in mouse models to study airway-to-brain neurons throughout the lifespan to better understand how they change with aging.
In his talk, Caltech Professor Sarkis Mazmanian focused on work in his lab linking the gut microbiome to Parkinson’s disease (PD), for instance by promoting alpha-synuclein protein pathology and motor problems in mouse models. His lab hypothesizes that the microbiome can nucleate alpha-synuclein in the gut via a bacterial amyloid protein that may subsequently promote pathology in the brain, potentially via the vagus nerve. Based on its studies, the lab has developed two interventions. One is giving alpha-synuclein-overexpressing mice a high-fiber diet to increase short-chain fatty acids in their gut, which actually modulates the activity of microglia in the brain. The high-fiber diet helps relieve motor dysfunction, corrects microglia activity, and reduces protein pathology, he showed. Another is a drug to disrupt the bacterial amyloid in the gut. It prevents alpha-synuclein formation in the mouse brain and ameliorates PD-like symptoms. These results are pending publication.
Meanwhile, Kevin Tracey, professor at Hofstra University and Northwell Health, took listeners on a journey up and down the vagus nerve to the spleen, describing how impulses in the nerve regulate immune system emissions of signaling molecules, or “cytokines.” Too great a surge can become harmful, for instance causing the autoimmune disorder rheumatoid arthritis. Tracey described how a newly U.S. Food and Drug Administration-approved pill-sized neck implant to stimulate the vagus nerve helps patients with severe forms of the disease without suppressing their immune system.
The brain’s border
Other speakers discussed opportunities for understanding neuro-immune interactions in aging and disease at the “borders” where the brain’s and body’s immune systems meet. These areas include the meninges that surround the brain, the choroid plexus (proximate to the ventricles, or open spaces, within the brain), and the interface between brain cells and the circulatory system.
For instance, taking a cue from studies showing that circadian disruptions are a risk factor for Alzheimer’s disease, Harvard Medical School Professor Beth Stevens of Boston Children’s Hospital described new research in her lab that examined how brain immune cells may function differently around the day-night cycle. The project, led by newly minted PhD Helena Barr, found that “border-associated macrophages” — long-lived immune cells residing in the brain’s borders — exhibited circadian rhythms in gene expression and function. Stevens described how these cells are tuned by the circadian clock to “eat” more during the rest phase, a process that may help remove material draining from the brain, including Alzheimer’s disease-associated peptides such as amyloid-beta. So, Stevens hypothesizes, circadian disruptions, for example due to aging or night-shift work, may contribute to disease onset by disrupting the delicate balance in immune-mediated “clean-up” of the brain and its borders.
Following Stevens at the podium, Washington University Professor Marco Colonna traced how various kinds of macrophages, including border macrophages and microglia, develop from the embryonic stage. He described the different gene-expression programs that guide their differentiation into one type or another. One gene he highlighted, for instance, is necessary for border macrophages along the brain’s vasculature to help regulate the waste-clearing cerebrospinal fluid (CSF) flow that Stevens also discussed. Knocking out the gene also impairs blood flow. Importantly, his lab has found that versions of the gene may be somewhat protective against Alzheimer’s, and that regulating expression of the gene could be a therapeutic strategy.
Colonna’s WashU colleague Jonathan Kipnis (a former student of Schwartz) also discussed macrophages associated with the particular border between brain tissue and the plumbing alongside the vasculature that carries CSF. These macrophages, his lab showed in 2022, actively govern the flow of CSF. He showed that removing the macrophages let Alzheimer’s proteins accumulate in mice. His lab is continuing to investigate ways in which these specific border macrophages may play roles in disease. He’s also looking, in separate studies, at how the skull’s bone marrow contributes to the population of immune cells in the brain and may play a role in neurodegeneration.
For all the talk of distant organs and the brain’s borders, neurons themselves were never far from the discussion. Harvard Medical School Professor Isaac Chiu gave them their direct due in a talk focusing on how they participate in their own immune defense, for instance by directly sensing pathogens and giving off inflammation signals upon cell death. He discussed a key molecule in that latter process, which is expressed among neurons all over the brain.
Whether they were looking within the brain, at its border, or throughout the body, speakers showed that age-related nervous system diseases are not only better understood but also possibly better treated by accounting not just for the nerve cells but also for their immune system partners.
Audience members packed Singleton Auditorium (and the overflow seating) in MIT’s Building 46 for the Sept. 18 symposium, “The Neuro-Immune Axis and the Aging Brain.”
The MIT Schwarzman College of Computing and the Mohamed bin Zayed University of Artificial Intelligence (MBZUAI) recently celebrated the launch of the MIT–MBZUAI Collaborative Research Program, a new effort to strengthen the building blocks of artificial intelligence and accelerate its use in pressing scientific and societal challenges.
Under the five-year agreement, faculty, students, and research staff from both institutions will collaborate on fundamental research projects to advance the technological foundations of AI and its applications in three core areas: scientific discovery, human thriving, and the health of the planet.
“Artificial intelligence is transforming nearly every aspect of human endeavor. MIT’s leadership in AI is greatly enriched through collaborations with leading academic institutions in the U.S. and around the world,” says Dan Huttenlocher, dean of the MIT Schwarzman College of Computing and the Henry Ellis Warren Professor of Electrical Engineering and Computer Science. “Our collaboration with MBZUAI reflects a shared commitment to advancing AI in ways that are responsible, inclusive, and globally impactful. Together, we can explore new horizons in AI and bring broad benefits to society.”
“This agreement will unite the efforts of researchers at two world-class institutions to advance frontier AI research across scientific discovery, human thriving, and the health of the planet. By combining MBZUAI’s focus on foundational models and real-world deployment with MIT’s depth in computing and interdisciplinary innovation, we are creating a transcontinental bridge for discovery. Together, we will not only expand the boundaries of AI science, but also ensure that these breakthroughs are pursued responsibly and applied where they matter most — improving human health, enabling intelligent robotics, and driving sustainable AI at scale,” says Eric Xing, president and university professor at MBZUAI.
Each institution has appointed an academic director to oversee the program on its campus. At MIT, Philip Isola, the Class of 1948 Career Development Professor in the Department of Electrical Engineering and Computer Science, will serve as program lead. At MBZUAI, Le Song, professor of machine learning, will take on the role.
Supported by MBZUAI — the first university dedicated entirely to advancing science through AI, and based in Abu Dhabi, U.A.E. — the collaboration will fund a number of joint research projects per year. The findings will be openly publishable, and each project will be led by a principal investigator from MIT and one from MBZUAI, with project selections made by a steering committee composed of representatives from both institutions.
Dan Huttenlocher (left), dean of the MIT Schwarzman College of Computing; Eric Xing (standing), president of MBZUAI; and Sami Haddadin, VP of research at MBZUAI, held a signing ceremony officially launching the MIT-MBZUAI Collaborative Research Program. The new international collaboration will unite faculty and students from both institutions to advance AI and accelerate its use in pressing scientific and societal challenges.
MIT associate professor of physics Riccardo Comin has been selected as a 2025 Experimental Physics Investigator by the Gordon and Betty Moore Foundation. Two MIT physics alumni — Gyu-Boong Jo PhD ’10 of Rice University, and Ben Jones PhD ’15 of the University of Texas at Arlington — were also among this year’s cohort of 22 honorees.
The prestigious Experimental Physics Investigators (EPI) Initiative recognizes mid-career scientists advancing the frontiers of experimental physics. Each award provides $1.3 million over five years to accelerate breakthroughs and strengthen the experimental physics community.
At MIT, Comin investigates magnetoelectric multiferroics by engineering interfaces between two-dimensional materials and three-dimensional oxide thin films. His research aims to overcome long-standing limitations in spin-charge coupling by moving beyond epitaxial constraints, enabling new interfacial phases and coupling mechanisms. In these systems, Comin’s team explores the coexistence and proximity of magnetic and ferroelectric order, with a focus on achieving strong magnetoelectric coupling. This approach opens new pathways for designing tunable multiferroic systems unconstrained by traditional synthesis methods.
Comin’s research expands the frontier of multiferroics by demonstrating stacking-controlled magnetoelectric coupling at 2D–3D interfaces. This approach enables exploration of fundamental physics in a versatile materials platform and opens new possibilities for spintronics, sensing, and data storage. By removing constraints of epitaxial growth, Comin’s work lays the foundation for microelectronic and spintronic devices with novel functionalities driven by interfacial control of spin and polarization.
Comin’s project, Interfacial MAGnetoElectrics (I-MAGinE), aims to study a new class of artificial magnetoelectric multiferroics at the interfaces between ferroic materials from 2D van der Waals systems and 3D oxide thin films. The team aims to identify and understand novel magnetoelectric effects to demonstrate the viability of stacking-controlled interfacial magnetoelectric coupling. This research could lead to significant contributions in multiferroics, and could pave the way for innovative, energy-efficient storage devices.
“This research has the potential to make significant contributions to the field of multiferroics by demonstrating the viability of stacking-controlled interfacial magnetoelectric coupling,” according to Comin’s proposal. “The findings could pave the way for future applications in spintronics, data storage, and sensing. It offers a significant opportunity to explore fundamental physics questions in a novel materials platform, while laying the ground for future technological applications, including microelectronic and spintronic devices with new functionalities.”
Comin’s group has extensive experience in researching 2D and 3D ferroic materials and electronically ordered oxide thin films, as well as ultrathin van der Waals magnets, ferroelectrics, and multiferroics. Their lab is equipped with state-of-the-art tools for material synthesis, including bulk crystal growth of van der Waals materials and pulsed laser deposition targets, along with comprehensive fabrication and characterization capabilities. Their expertise in magneto-optical probes and advanced magnetic X-ray techniques promises to enable in-depth studies of electronic and magnetic structures, contributing significantly to the understanding of spin-charge coupling in magnetochiral materials.
The coexistence of ferroelectricity and ferromagnetism in a single material, known as multiferroicity, is rare, and strong spin-charge coupling is even rarer due to fundamental chemical and electronic structure incompatibilities.
The few known bulk multiferroics with strong magnetoelectric coupling generally rely on inversion symmetry-breaking spin arrangements, which only emerge at low temperatures, limiting practical applications. While interfacial magnetoelectric multiferroics offer an alternative, achieving efficient spin-charge coupling often requires stringent conditions like epitaxial growth and lattice matching, which limit material combinations. This research proposes to overcome these limitations by using non-epitaxial interfaces of 2D van der Waals materials and 3D oxide thin films.
Unique features of this approach include leveraging the versatility of 2D ferroics for seamless transfer onto any substrate, eliminating lattice matching requirements, and exploring new classes of interfacial magnetoelectric effects unconstrained by traditional thin-film synthesis limitations.
Launched in 2018, the Moore Foundation’s EPI Initiative cultivates collaborative research environments and provides research support to promote the discovery of new ideas and emphasize community building.
“We have seen numerous new connections form and new research directions pursued by both individuals and groups based on conversations at these gatherings,” says Catherine Mader, program officer for the initiative.
The Gordon and Betty Moore Foundation was established to create positive outcomes for future generations. In pursuit of that vision, it advances scientific discovery, environmental conservation, and the special character of the San Francisco Bay Area.
Ammonia is one of the most widely produced chemicals in the world, used mostly as fertilizer, but also for the production of some plastics, textiles, and other applications. Its production, through processes that require high heat and pressure, accounts for up to 20 percent of all the greenhouse gases from the entire chemical industry, so efforts have been underway worldwide to find ways to reduce those emissions.
Now, researchers at MIT have come up with a clever way of combining two different methods of producing the compound that minimizes waste products and, when combined with some other simple upgrades, could reduce the greenhouse emissions from production by as much as 63 percent compared to the leading “low-emissions” approach in use today.
The new approach is described in the journal Energy & Fuels, in a paper by MIT Energy Initiative (MITEI) Director William H. Green, graduate student Sayandeep Biswas, MITEI Director of Research Randall Field, and two others.
“Ammonia has the most carbon dioxide emissions of any kind of chemical,” says Green, who is the Hoyt C. Hottel Professor in Chemical Engineering. “It’s a very important chemical,” he says, because its use as a fertilizer is crucial to being able to feed the world’s population.
Until late in the 19th century, the most widely used source of nitrogen fertilizer was mined deposits of bat or bird guano, mostly from Chile, but that source was beginning to run out, and there were predictions that the world would soon be running short of food to sustain the population. But then a new chemical process, called the Haber-Bosch process after its inventors, made it possible to make ammonia out of nitrogen from the air and hydrogen, which was mostly derived from methane. But both the burning of fossil fuels to provide the needed heat and the use of methane to make the hydrogen led to massive climate-warming emissions from the process.
To address this, two newer variations of ammonia production have been developed: so-called “blue ammonia,” where the greenhouse gases are captured right at the factory and then sequestered deep underground, and “green ammonia,” produced by a different chemical pathway, using electricity instead of fossil fuels to electrolyze water to make hydrogen.
Blue ammonia is already beginning to be used, with a few plants operating now in Louisiana, Green says, and the ammonia mostly being shipped to Japan, “so that’s already kind of commercial.” Other parts of the world are starting to use green ammonia, especially in places that have lots of hydropower, solar, or wind to provide inexpensive electricity, including a giant plant now under construction in Saudi Arabia.
But in most places, both blue and green ammonia are still more expensive than the traditional fossil-fuel-based version, so many teams around the world have been working on ways to cut these costs as much as possible so that the difference is small enough to be made up through tax subsidies or other incentives.
The problem is growing, because as the population grows, and as wealth increases, there will be ever-increasing demands for nitrogen fertilizer. At the same time, ammonia is a promising substitute fuel to power hard-to-decarbonize transportation such as cargo ships and heavy trucks, which could lead to even greater needs for the chemical.
“It definitely works” as a transportation fuel, by powering fuel cells that have been demonstrated for use by everything from drones to barges and tugboats and trucks, Green says. “People think that the most likely market of that type would be for shipping,” he says, “because the downside of ammonia is it’s toxic and it’s smelly, and that makes it slightly dangerous to handle and to ship around.” So its best uses may be where it’s used in high volume and in relatively remote locations, like the high seas. In fact, the International Maritime Organization will soon be voting on new rules that might give a strong boost to the ammonia alternative for shipping.
The key to the new proposed system is to combine the two existing approaches in one facility, with a blue ammonia factory next to a green ammonia factory. The process of generating hydrogen for the green ammonia plant leaves a lot of leftover oxygen that just gets vented to the air. Blue ammonia, on the other hand, uses a process called autothermal reforming that requires a source of pure oxygen, so if there’s a green ammonia plant next door, it can use that excess oxygen.
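To make the scale of that oxygen synergy concrete, here is a rough, illustrative calculation based only on standard reaction stoichiometry for water electrolysis and Haber-Bosch synthesis; the autothermal reformer’s oxygen demand used below is a placeholder assumption for illustration, not a figure from the MIT study.

```python
# Rough, illustrative estimate of the oxygen co-produced by a green ammonia plant.
# The stoichiometry is standard chemistry; the ATR oxygen demand is an assumed
# placeholder, NOT a number from the Energy & Fuels paper.

M_H2, M_O2, M_NH3 = 2.016, 32.00, 17.03   # molar masses in g/mol

# Electrolysis: 2 H2O -> 2 H2 + O2, i.e., 1 mol O2 per 2 mol H2
o2_per_kg_h2 = M_O2 / (2 * M_H2)          # ~7.9 kg O2 per kg H2

# Haber-Bosch: N2 + 3 H2 -> 2 NH3, i.e., 3 mol H2 per 2 mol NH3
h2_per_kg_nh3 = (3 * M_H2) / (2 * M_NH3)  # ~0.18 kg H2 per kg NH3

# Oxygen that would otherwise be vented, per tonne of green ammonia
o2_per_t_green_nh3 = o2_per_kg_h2 * h2_per_kg_nh3  # ~1.4 t O2 per t NH3

# Hypothetical oxygen demand of the neighboring blue plant's autothermal reformer
assumed_atr_o2_per_t_blue_nh3 = 0.6       # t O2 per t blue NH3 (assumption)

blue_supported = o2_per_t_green_nh3 / assumed_atr_o2_per_t_blue_nh3
print(f"O2 co-produced: {o2_per_t_green_nh3:.2f} t per t of green NH3")
print(f"Blue NH3 whose ATR oxygen that could cover: ~{blue_supported:.1f} t "
      f"(under the assumed demand)")
```

Under these illustrative numbers, each tonne of green ammonia co-produces on the order of 1.4 tonnes of oxygen, which suggests why co-locating the plants is economically attractive: the green plant’s byproduct can supply the pure oxygen the blue plant’s reformer requires.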
“Putting them next to each other turns out to have significant economic value,” Green says. This synergy could help hybrid “blue-green ammonia” facilities serve as an important bridge toward a future where eventually green ammonia, the cleanest version, could finally dominate. But that future is likely decades away, Green says, so having the combined plants could be an important step along the way.
“It might be a really long time before [green ammonia] is actually attractive” economically, he says. “Right now, it’s nowhere close, except in very special situations.” But the combined plants “could be a really appealing concept, and maybe a good way to start the industry,” because so far only small, standalone demonstration plants of the green process are being built.
“If green or blue ammonia is going to become the new way of making ammonia, you need to find ways to make it relatively affordable in a lot of countries, with whatever resources they’ve got,” he says. This new proposed combination, he says, “looks like a really good idea that can help push things along. Ultimately, there’s got to be a lot of green ammonia plants in a lot of places,” and starting out with the combined plants, which could be more affordable now, could help to make that happen. The team has filed for a patent on the process.
Although the team did a detailed study of both the technology and the economics that shows the system has great promise, Green points out that “no one has ever built one. We did the analysis, it looks good, but surely when people build the first one, they’ll find funny little things that need some attention,” such as details of how to start up or shut down the process. “I would say there’s plenty of additional work to do to make it a real industry.” But this study, which shows the costs to be considerably lower than for standalone blue or green plants, “definitely encourages the possibility of people making the big investments that would be needed to really make this industry feasible.”
This proposed integration of the two methods “improves efficiency, reduces greenhouse gas emissions, and lowers overall cost,” says Kevin van Geem, a professor in the Center for Sustainable Chemistry at Ghent University, who was not associated with this research. “The analysis is rigorous, with validated process models, transparent assumptions, and comparisons to literature benchmarks. By combining techno-economic analysis with emissions accounting, the work provides a credible and balanced view of the trade-offs.”
He adds that, “given the scale of global ammonia production, such a reduction could have a highly impactful effect on decarbonizing one of the most emissions-intensive chemical industries.”
The research team also included MIT postdoc Angiras Menon and MITEI research lead Guiyan Zang. The work was supported by IHI Japan through the MIT Energy Initiative and the Martin Family Society of Fellows for Sustainability.
MIT researchers have proposed an approach for combined blue-green ammonia production that minimizes waste products and, when combined with some other simple upgrades, could reduce the greenhouse emissions from ammonia production by as much as 63 percent, compared to the leading “low-emissions” approach being used today.
‘Harvard Thinking’: Cancer is rising among younger people — why?
Illustrations by Liz Zonarich/Harvard Staff
Samantha Laine Perfas
Harvard Staff Writer
October 8, 2025
In podcast, experts outline potential factors driving trend and how to reduce risk
Cancer rates in recent decades have been declining. Yet from 2010 to 2019, the incidence of 14 cancer types among people under the age of 50 increased.
“Somebody who is born in 1990 now has quadruple the risk of developing rectal cancer and over double the risk of developing colon cancer compared to a similarly aged person who was born in 1950,” said Kimmie Ng, an associate professor of medicine at Harvard Medical School and the founding director of the Young-Onset Colorectal Cancer Center at the Dana-Farber Cancer Institute.
This is a global trend, according to Timothy Rebbeck, the Vincent L. Gregory Jr. Professor of Cancer Prevention at the Harvard T.H. Chan School of Public Health. It’s happening in both men and women, leading researchers to believe that the factors causing the increases must be widespread.
“The last time we saw this kind of phenomenon on a global scale and with such changes was lung cancer in the mid-20th century,” Rebbeck said. “But we figured that out pretty quickly; that was cigarette smoking.”
To identify what’s driving the rise in early-onset cancer rates, researchers are looking at the role of lifestyle, environmental changes, and potential genetic variations — a recent study even turns to the microbiome. Tomotaka Ugai, a cancer epidemiologist at the Chan School, said that pursuing a healthy lifestyle still goes a long way in reducing not just the risk of cancer but a variety of health issues.
“[Researchers] can speak up more, but also we can collaborate with industries or policymakers to increase awareness of early-onset cancers,” Ugai said.
In this episode of “Harvard Thinking,” host Samantha Laine Perfas talks with Ng, Rebbeck, and Ugai about what’s known about early-onset cancer — and how individuals can mitigate risk.
Kimmie Ng: Somebody who is born in 1990 now has quadruple the risk of developing rectal cancer and over double the risk of developing colon cancer compared to a similarly aged person who was born in 1950.
Samantha Laine Perfas: Contrary to overall cancer trends, there’s been an increase in certain cancer diagnoses in people under 50. From 2010 through 2019, the incidence of 14 cancer types increased among people in this demographic. The big question is, why? Does it have to do with lifestyle choices? Are there environmental factors at play? What can be done to mitigate risk?
Welcome to “Harvard Thinking,” a podcast where the life of the mind meets everyday life. Today I’m joined by:
Timothy Rebbeck: Tim Rebbeck. I’m the Vincent Gregory Professor of Cancer Prevention at the Harvard Chan School and the Dana-Farber Cancer Institute.
Laine Perfas: He’s a cancer epidemiologist and studies global cancer trends and disparities. Then:
Ng: Kimmie Ng. I’m an associate professor of medicine at Harvard Medical School.
Laine Perfas: She’s also a medical oncologist at Dana-Farber Cancer Institute and the founding director of the Young-Onset Colorectal Cancer Center. And our final guest:
Tomotaka Ugai: My name is Tomotaka Ugai, I’m a cancer epidemiologist at Harvard T.H. Chan School of Public Health.
Laine Perfas: Tomo is also an instructor at Brigham and Women’s Hospital and a founder of the International Cancer Spectrum Consortium.
And I’m Samantha Laine Perfas, your host and a writer for The Harvard Gazette. Today we’ll look at early-onset cancer and how younger people can navigate their increased risk.
I think it’s important to start with the context of cancer overall, which is that rates have been declining in recent years. However, some cancers are on the rise, specifically in people under 50. What are we seeing?
Ugai: I think when we talk about increase in early-onset cancer, this is not a simple story. The incidence of early-onset cancer has been increasing in many parts of the world, but this is different between cancer types, regions, and countries. So we need to know more about such differences. Our recent analysis shows that many early-onset cancer types, including colorectal cancer, breast cancer, uterine cancer, kidney cancer, pancreatic cancer, and multiple myeloma have increased more rapidly compared to late-onset cancer types. Also for colorectal cancer and uterine cancer, both the incidence and mortality have increased concurrently. And this phenomenon is mainly observed in high socioeconomic countries, including the United States, the U.K., Australia, and New Zealand.
“Our recent analysis shows that many early-onset cancer types, including colorectal cancer, breast cancer, uterine cancer, kidney cancer, pancreatic cancer, and multiple myeloma have increased more rapidly compared to late-onset cancer types.”
Rebbeck: If I could just add to what Tomo said, I think one of the very interesting observations that he raised is that what we’ve been observing over the last couple of decades is increases in cancers diagnosed under the age of 50, at many different tumor sites, around the world, in men and women. And it’s a phenomenon that we’ve barely ever seen in the past. The last time we saw this kind of phenomenon on a global scale and with such changes was lung cancer in the mid-20th century, when it started rising from almost a rare cancer to the most common cancer. But we figured that out pretty quickly; that was cigarette smoking. In this case, we’re talking about probably major exposures or something like that, but we’re also talking about many cancers all over the world. And so there’s something really critical and interesting going on here.
Ng: And what’s interesting is if you look closely at the epidemiologic trends, they follow what we call a birth cohort effect, where the increase is really varying by generation. To give you an example for colorectal cancer, somebody who is born in 1990 now has quadruple the risk of developing rectal cancer and over double the risk of developing colon cancer compared to a similarly aged person who was born in 1950. And this is important because it gives us clues as to what might be underlying the rising trends. And what a birth cohort effect usually suggests is that it’s a combination of some environmental exposures that are affecting the incidence by generation.
Laine Perfas: Kimmie, could you tell me a little bit more about the birth cohort effect, and also what’s it been like treating patients who are now developing these cancers at a much younger age?
Ng: If you look at the trends, the rise has been happening in every birth cohort basically since 1950. But the trends have been formally documented in published literature since probably the mid-1990s. So for colorectal cancer, for example, we have been seeing about a two percent per year rise in the rates of colorectal cancer in both men and women since the mid-1990s, and it is estimated that by the year 2030, colorectal cancer will be the leading cause of cancer-related death in people under the age of 50.
It is well-known that the challenges faced by younger people diagnosed with cancer are very different than the challenges faced by older people. And that was partly the impetus for us starting our dedicated Young-Onset Colorectal Cancer Center so that we can better address these unique issues that affect young people and that ranges from issues about fertility — many of these people are still trying to expand their families or start their families. It extends to sexual health. It extends to career and education disruptions, and over 80 percent of young patients with colorectal cancer have children under the age of 18 when they’re often diagnosed with an advanced stage of disease. Many are also in the sandwich generation when they’re taking care of elderly parents as well. It’s just such a difficult time to be hit with a terminal cancer diagnosis. And so there are high levels of psychosocial distress. Many need social work support and psychiatric support, so we are trying to provide them with all of that through our center.
Laine Perfas: It’s interesting that the last time a phenomenon like this happened it was with lung cancer. It was then directly linked to smoking cigarettes. Do we have any sense of what might be contributing to the current trends?
Rebbeck: We certainly have many hypotheses that make sense. Many of those, of course, include diet, lifestyle, obesity, alcohol, and tobacco use. So, major exposures: they would have to be fairly common exposures in order to see the rate changes that we’re observing. They would have to be fairly general carcinogen exposures, meaning they would have to be influencing cancers at multiple sites, because that’s what we’re seeing. They would have to be acting in men and women, since that’s what we’re seeing in the epidemiological data. And they would have to be things that have probably been changing over the past decades worldwide. And so you can imagine what some of those are. I’m sure we’ll hear more from Tomo and Kimmie about this and some of the work they’ve done. But obesity fits that pattern very well. Obesity is something that has increased substantially in recent decades. It’s changed across the world. It’s happening in men and women, and particularly it’s happening in children. To the degree that obesity is a leading explanation for these changes, the relevant exposure is probably happening earlier in life, in children, and the lag we’re seeing between that change in exposure and the advent of earlier-onset cancers reflects something that started earlier in life, which fits with these being early-onset cancers. And so all of those pieces fit, though I’m guessing it’s not the only explanation. And there are many other hypotheses out there that you can guess, microplastics for example, and you can begin to think about all the things that might be going on that have changed in recent decades.
Ugai: I think there are several important clues for potential causes for the increase in early-onset cancers. First, as Kimmie mentioned, there is a birth cohort effect, which means that more recent generations have a higher risk of early-onset cancers; this effect is linked to the change in environmental factors or lifestyle factors for many years. For example, many lifestyle factors such as obesity, physical inactivity, diet, and some environmental factors such as air pollution have changed since 1940 to 1950, which may be a very important factor.
“There are several important clues for potential causes for increase in early-onset cancers. … Many lifestyle factors such as obesity, physical inactivity, diet, and some environmental factors such as air pollution have changed since 1940 to 1950.”
Second, as I mentioned, several early-onset cancer types have increased more rapidly compared to later-onset cancers. This suggests that certain exposures, such as new risk factors or established risk factors, have shifted toward younger populations. For example, the prevalence of obesity has increased among younger populations but also pollutions or microenvironments or some other toxins can be considered potential new risk factors for early-onset cancers.
Ng: Just to follow up on this discussion about obesity, I agree it has been posited as the leading hypothesis for why early-onset cancers have been rising globally. And indeed, if you look at the cancer types that have been increasing in young people, they are all known to be associated with obesity, including uterine cancer and cancers of the digestive system, which don’t only include colorectal cancer, but also pancreatic cancer, biliary tract cancer, appendix cancers, and so many different others. However, I can tell you that in our clinics here at Dana-Farber, the patients we’re seeing for the most part are not obese and they live healthy and active lifestyles. They eat very healthily. So I do think while obesity is certainly a contributor to the rising trends, it is probably not the only answer.
“In our clinics here at Dana-Farber, the patients we’re seeing for the most part are not obese and they live healthy and active lifestyles. … So I do think while obesity is certainly a contributor to the rising trends, it is probably not the only answer.”
Laine Perfas: There are actually people in my life who are very young and active and healthy who are shocked to find out that they have cancer and they’re not always easily treatable, some of them are very aggressive. Thinking about the trends, it’s hard not to be like, I was born in the ’90s, is that just a reality that my generation is facing, that my rates are going to be four times higher than someone born in the 1950s regardless of my choices? Do we have more agency than that?
Ng: I will say that following a healthy diet and lifestyle and maintaining a normal body weight is still so critically important. And I think Tim was mentioning these factors and behaviors in early life are what we think are the important time window of exposure that leads to increased susceptibility to these cancers in young adulthood.
So I do still think it is really important for public health agencies and the health system in general to educate children, adolescents on the importance of a healthy diet and lifestyle and on maintaining a normal body weight. Because those things will likely not only protect you against developing multiple different cancers at whatever age, but also against a host of other chronic diseases.
Rebbeck: The other point I’d add to that is, depending on how you hear this message as a person born in the ’80s, ’90s, I could imagine people panicking about that. And I think it’s important to keep in perspective that most of these cancers are still predominantly diagnosed in older individuals over the age of 50. It’s not like individuals under the age of 50, age 40 are now the main people diagnosed with these cancers. That’s not the case. It’s certainly true that we have a much, much higher risk of cancer now than we did earlier if you’re under the age of 50. But it is still relatively rare.
Ugai: I just want to follow up on Kimmie’s very important point about early-life exposures. Evidence indicates that a healthy diet in early life is associated with reduced risk of early-onset colorectal cancer. So if you’re a parent, you can start a healthy diet or healthy lifestyle as soon as possible. At the same time, you can teach such healthy lifestyles to children so that they can have reduced risk.
Laine Perfas: We can adopt healthier habits, but I also want to talk about genetics, something we can’t change. What role do genetics play?
Rebbeck: It’s well-known that individuals who have an inherited predisposition to cancer tend to be diagnosed at a younger age. So individuals who are diagnosed with hereditary breast cancer because they’ve inherited a BRCA1 or BRCA2 mutation, the average age of breast cancer diagnosis, for example, is 10 years younger than the general population. Genetics, and particularly these high-penetrance hereditary patterns of cancer, are certainly associated with the early ages of onset. But what we don’t see or don’t anticipate is that changes in the germline genetic pattern that create these very high risks have changed substantially over the last decades. We don’t expect that germline genetics, frequencies, mutation types or whatever have changed so much that it would explain the majority of these early-onset diagnosis differences that we’ve observed. Having said that, cancer is a genetic disease. There’s always underlying susceptibility to cancer. And it’s possible and perhaps even likely that there are gene-environment interactions that people who have an underlying susceptibility and now are being exposed to whatever the major factors are, that they’re becoming penetrant. They’re becoming diagnosed earlier and earlier because of those interactions between genes and environments.
Ng: This is such an important topic because, I completely agree, if you look at gastrointestinal cancers and those that are happening in people under the age of 50, probably up to a quarter or so are found to have a hereditary reason for having developed that cancer at a young age. But that leaves 75 percent with sporadic cancers not related to family history or a hereditary predisposition. Still, because you are much more likely to identify a hereditary condition the younger you are diagnosed, it is important that the standard of care include hereditary genetic testing for any young person under the age of 50 who is diagnosed with cancer.
Laine Perfas: I’m curious to hear what you all think about the lowering of screening ages for various cancers. For example, colorectal cancer was lowered from 50 to 45. Breast cancer screening has actually fluctuated multiple times. What are the pros and cons of screening earlier?
Ugai: As you said, in 2018, the American Cancer Society recommended initiating colorectal cancer screening at the age of 45 instead of age 50 in the average-risk populations. I personally think that this approach would work. But at the same time, we need to think more about cost-effectiveness, invasiveness, and potential complications. Yeah, this is a little bit difficult to decide.
Rebbeck: To add to Tomo’s point, because cancers are rarer at earlier ages, lowering the age of screening is inherently less efficient, if you will. We’ll detect fewer cancers if we screen the same number of people, because the cancers are just rarer. And so when we change screening ages for the cancers we can screen for in this situation, colorectal and breast, for example, the approaches probably pay off a little bit less. What are the risks, trade-offs, and cost benefits? I think that’s really an important consideration for our public health.
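As a rough illustration of that efficiency point, the sketch below shows how the same screening test finds fewer cancers per 100,000 people screened, and yields a lower positive predictive value, when applied to a lower-risk group; the prevalence, sensitivity, and specificity values are entirely hypothetical, not figures cited by the speakers.

```python
# Hypothetical illustration of why screening a lower-risk (younger) group is less
# efficient: the prevalence, sensitivity, and specificity below are placeholder
# values, not figures from the discussion or from any real screening test.

def positive_predictive_value(prevalence, sensitivity=0.90, specificity=0.90):
    """Probability that a positive screen reflects a true cancer (Bayes' rule)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

for label, prevalence in [("higher-risk group", 0.020), ("lower-risk group", 0.002)]:
    found_per_100k = 0.90 * prevalence * 100_000   # true cancers detected
    ppv = positive_predictive_value(prevalence)
    print(f"{label}: ~{found_per_100k:.0f} cancers found per 100,000 screened; "
          f"chance a positive result is a true cancer is about {ppv:.0%}")
```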
Ng: I do think that lowering the screening age for average-risk individuals for colorectal cancer down to 45 is a good first step in the right direction. The majority of young-onset colorectal cancers are diagnosed in people in their 40s. However, going back again to those epidemiologic trends, the rates of rise are actually steepest in the very youngest patients who are below the eligible age of screening. And so clearly lowering the age or basing screening recommendations on chronological age alone is going to be insufficient for addressing this problem of early-onset cancers. And what I think this means is that it really highlights the importance of doing the research to better understand, what exactly are the risk factors? Who is at risk? What are the causes? And then, can we identify the young people who are at higher risk of colorectal cancer and target them for earlier screening?
Rebbeck: I think that’s really important, because we’ve seen that population-based screening has value in many situations, but risk-adaptive screening approaches are becoming more and more relevant and appropriate, particularly in this situation. In breast cancer, for example, we think not only about different ways of screening, like using MRI rather than mammography in very young women, but also about the timing and cadence of that screening. So as we start talking about individuals who are more unusual because of their risk, and about a rarer situation like colorectal cancer in the 30s, population-wide screening becomes less and less compelling, and a targeted screening approach, or targeted early detection, is probably what we need to be thinking more and more about.
Laine Perfas: Have screening changes made a difference or is it still too early to tell?
Ng: There actually was just a recent paper published in JAMA this month that did show that the uptake of colorectal cancer screening in people between the ages of 45 and 49 has been slowly picking up since the United States Preventive Services Task Force issued their revised guidelines to lower the age. And actually, there does seem to be a promising shift toward detecting more early-stage cancers now because of the recent guideline change. So I think it is starting to work. It is still early days, but I do hope the uptake will continue.
Rebbeck: And I think one of the interesting observations is that most of the cancers that we’re talking about don’t have screening modalities. Colorectal cancer is clearly the 500-pound gorilla of this conversation, in part because it’s a common cancer, but it also has very clear, actionable things you can do, like colonoscopy. Most of these other cancers don’t. And I think that screening is critically important, but we can’t do that for pancreatic cancer or kidney cancer or whatever. And so there’s a lot of other issues that we need to think about beyond screening for most of these cancers.
Ugai: Also, I would like to add one more important thing about early-onset cancer and screening. The increase in early-onset cancer can probably be partially attributed to increasing screening and early detection. Advances in cancer registry systems or screening devices can also affect the reported incidence of early-onset cancers. Again, it’s important to better understand what’s going on at the global scale.
Laine Perfas: Are you saying it’s possible that the increase in rates is partially due to simply an increase in screening, that the cancers may have been there before, we just didn’t know about them because we weren’t screening for them?
Ugai: Yes, that’s true, and for example, for thyroid cancer and prostate cancers, when we looked at the actual data, the incidence of early-onset prostate cancer and thyroid cancer has been increasing for the past few decades. But when we looked at both incidence and mortality, the mortality has not increased. So potentially, this increase might not be true and this is attributable to increasing screening.
Ng: I just want to point out though that is not true for colorectal cancer, right? The rise has been documented since the mid-1990s when screening age was 50, and most of the cases of young-onset colorectal cancer are late-stage cancer, Stage 3 or 4, both points of which really rule out this rise as being a screening effect.
Rebbeck: It’s very true. There’s a great example in South Korea a couple decades ago where they started screening for thyroid cancer and the rates skyrocketed. The mortality rates stayed exactly the same, because there’s a lot of thyroid cancer in the population that’s indolent and doesn’t cause any problems. Similarly, as Tomo was saying, with prostate cancer, lots of indolent prostate cancer. I’m not sure that’s the case with colorectal cancer, that there isn’t a lot of indolent colorectal cancer that just sits there for many decades and doesn’t progress. So I think each cancer is going to be different. Each cancer probably needs to be thought of in terms of screening and in terms of overdiagnosis, and the value of screening. They’re all going to be quite different.
Just as a note for prostate cancer, which I think is a great sort of canary in the coal mine for the kinds of things that we anticipate happening in cancer screening. The U.S. Preventive Services Task Force changed its guidelines about prostate cancer screening with PSA over many years. And in the most recent change, which happened around 2018, when they started slowing down that screening, in more recent periods — the last few years — the mortality in prostate cancer is starting to rise. That took a long time to happen, and it’s not an early-onset cancer by any means, but I think that when we see these very broad changes in policy and guidelines, if the screening is making a difference, we will eventually see changes to mortality.
Ng: Yeah, it’s so important to consider the individual screening practices of each country as you compare global trends in early-onset cancer incidence. As an example, in Japan and South Korea, there is population-based and opportunistic screening for gastric cancer. And so the rates of gastric cancer have actually not been rising in young people or older people in those countries. It’s really important to take into consideration what different countries do when interpreting incidence trends.
Laine Perfas: It is pretty well-known that early detection is a key to better, more effective treatment. Beyond just screening, what barriers stand in the way to earlier detection?
Rebbeck: In colorectal cancer, of course, it’s the ability and willingness for people to have a colonoscopy. And Kimmie can probably talk a lot about this from her experience, but colonoscopies, while very effective, are things nobody likes. It’s hard to do. It’s icky. It’s not something that is an easy thing. And there are some clear barriers there. And I’d be interested to hear the others’ opinions about a tiered approach where we use fecal occult blood or FIT testing or something like that as an adjunct to a colonoscopy. Are there better approaches that might maximize the ability to detect cancers even among people who may be resistant to doing the gold standard colonoscopy?
Ng: On top of personal reasons why somebody would not want to get a colonoscopy, there are also other logistical barriers, right? Especially for people who have to work multiple jobs and cannot take time off from work to do the bowel prep and then find a ride to their colonoscopy and find a ride home from their colonoscopy. These are real challenges that many people face on a daily basis that really prevent them from being able to do screening. And so I think that is why it’s so important that the United States Preventive Services Task Force included a menu of different test options as ways to screen for colorectal cancer. Because a home-based stool test may be much easier for somebody to do than overcoming all those logistics to get to their colonoscopy.
Ugai: In addition to that, I just want to highlight the importance of increasing awareness of early-onset cancers. Maybe we can speak up more, but also we can collaborate with industries or policymakers to increase awareness of early-onset cancers.
Rebbeck: And I would add to that list clinicians, primary care clinicians, people who might be the first line in identifying people who might have symptoms of colorectal cancer but the patient is 30 years old and they don’t think about it, or they put it off as something else. So I think there’s a lot of awareness on lots of different levels to ensure we get people into the right care pathways at the earliest possible time.
Ng: I also think there is a stigma around certain cancers that prevents conversations about the diagnosis, about the symptoms. Patients are not comfortable bringing up symptoms related to their bowels to their primary care physicians or even raising them to their family members. And so I do think normalizing conversations around some of these issues will also go a long way in raising awareness.
Laine Perfas: There is still so much that we don’t know. What should we be researching or where should we be looking next for answers?
Rebbeck: The generic answer to that is to understand what the lifestyle, obesity, adiposity, environmental exposures are likely to be. And we have a lot of clues already from studies, but those studies are very difficult to do and they require very large sample sizes done appropriately. They may require prospective cohorts that may take years or decades to follow. And the gold standard of identifying these kinds of risk factors is something that we won’t have an answer for immediately.
Ng: This is such a challenging problem to study, and the life course studies are probably the best way to understand what’s happening in childhood that then changes you somehow to make you at increased risk of developing cancer in young adulthood. But those take too long, would be too costly. And we really can’t wait that long for answers, honestly. Because of how complex this phenomenon is, it really is going to take a multidisciplinary team. We need epidemiologists. We need oncologists, basic scientists, environmental health experts all working together to really try to understand what the underlying etiologies are.
Rebbeck: And there may be a great opportunity for a quick turnaround of basic science. Basic science can happen a lot more quickly than these large epidemiological studies. And if we really had a good sense of what the molecular etiology, the mechanism of early-onset cancers is, is it different than later-onset cancers? Are the molecular signatures in those tumors different?
Ng: To give an example, actually, there was a recent paper published in Nature that identified a mutational signature in DNA caused by a genotoxin called colibactin that seems to be a lot more common in younger people who develop colorectal cancer than older people who develop colorectal cancer. And it’s exciting because it’s the first evidence that the microbiome, because we think a bug called pks+ E. coli is producing this colibactin to damage DNA, may be the contributor. It’s not going to explain all of early-onset colorectal cancer, but it does implicate potentially the microbiome as a reason for why this might be happening worldwide.
Laine Perfas: What advice do you have to empower our listeners to better manage their health and mitigate their cancer risk?
Rebbeck: Awareness. Understand the symptoms of some of these cancers and ask questions. Become educated, and not only about colorectal cancer, but many of these early-onset cancers in individuals who are under the age of 50. And, as Kimmie said earlier, live a healthy lifestyle: Eat well, exercise, keep your weight down, all the things that we know from all of the other advice we’ve gotten around cancer and other diseases. Those are certainly things that are empowering. People can do those; none of them are easy, but people can do them to minimize their risks.
Ng: I would also say for the cancers that do have screening guidelines and programs, get screened, because that could be lifesaving. And something else that’s important to mention is to know your family history too, because if there is a family history of that cancer in your relatives, especially if it happened at a young age in those relatives, then you can qualify to get screened at an earlier age, and that as well can be lifesaving.
Ugai: The important thing is a healthy lifestyle and a healthy diet. The important fact is that many established cancer risk factors overlap between early-onset and regular-onset cancers. So if you can avoid established cancer risk factors, you can also reduce the risk of other non-communicable diseases. Early-life exposures can be very important, so it helps to avoid such cancer risk factors as early as possible. At the same time, we can teach such a beneficial, healthy lifestyle to younger populations. I think that’s important.
Laine Perfas: Thank you all for taking time to talk to me today about this.
Rebbeck: Thank you for having us.
Ng: Thank you for having us.
Laine Perfas: Thanks for listening. If you’d like to see a transcript of this episode or listen to our other episodes, visit harvard.edu/thinking. To support us, rate us on Apple Podcasts and Spotify, or share this episode with a friend or colleague. This episode was hosted and produced by me, Samantha Laine Perfas, with production and editing support from Sarah Lamodi, edited by Ryan Mulcahy and Paul Makishima. Original music and sound designed by Noel Flatt, produced by Harvard University. Copyright 2025.
Chatbots like ChatGPT and Claude have experienced a meteoric rise in usage over the past three years because they can help you with a wide range of tasks. Whether you’re writing Shakespearean sonnets, debugging code, or need an answer to an obscure trivia question, artificial intelligence systems seem to have you covered. The source of this versatility? Billions, or even trillions, of textual data points across the internet.
Those data aren’t enough to teach a robot to be a helpful household or factory assistant, though. To understand how to handle, stack, and place various arrangements of objects across diverse environments, robots need demonstrations. You can think of robot training data as a collection of how-to videos that walk the systems through each motion of a task. Collecting these demonstrations on real robots is time-consuming and not perfectly repeatable, so engineers have created training data by generating simulations with AI (which don’t often reflect real-world physics), or tediously handcrafting each digital environment from scratch.
Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Toyota Research Institute may have found a way to create the diverse, realistic training grounds robots need. Their “steerable scene generation” approach creates digital scenes of things like kitchens, living rooms, and restaurants that engineers can use to simulate lots of real-world interactions and scenarios. Trained on over 44 million 3D rooms filled with models of objects such as tables and plates, the tool places existing assets in new scenes, then refines each one into a physically accurate, lifelike environment.
Steerable scene generation creates these 3D worlds by “steering” a diffusion model — an AI system that generates a visual from random noise — toward a scene you’d find in everyday life. The researchers used this generative system to “in-paint” an environment, filling in particular elements throughout the scene. You can imagine a blank canvas suddenly turning into a kitchen scattered with 3D objects, which are gradually rearranged into a scene that imitates real-world physics. For example, the system ensures that a fork doesn’t pass through a bowl on a table — a common glitch in 3D graphics known as “clipping,” where models overlap or intersect.
How exactly steerable scene generation guides its creation toward realism, however, depends on the strategy you choose. Its main strategy is “Monte Carlo tree search” (MCTS), where the model creates a series of alternative scenes, filling them out in different ways toward a particular objective (like making a scene more physically realistic, or including as many edible items as possible). It’s used by the AI program AlphaGo to beat human opponents in Go (a game similar to chess), as the system considers potential sequences of moves before choosing the most advantageous one.
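As a rough illustration of that sequential framing (and not the team’s actual implementation, which steers a trained diffusion model over libraries of 3D assets), the toy Python sketch below runs a one-level Monte Carlo tree search over candidate object placements on a one-dimensional “table”: each decision adds one object, random rollouts estimate how full a partial scene can become, and UCB1 balances exploration and exploitation. The scene representation, scoring rule, and parameter values are all invented for this example.

```python
# Toy, one-level MCTS-style search for building up a "scene" one object at a
# time. Hypothetical illustration only; not the paper's method.
import math
import random

TABLE_WIDTH = 10.0
OBJECT_WIDTH = 1.0

def is_valid(scene, new_pos):
    """A placement is valid if the object stays on the table and does not
    overlap any existing object (no 'clipping')."""
    if new_pos < 0 or new_pos + OBJECT_WIDTH > TABLE_WIDTH:
        return False
    return all(abs(new_pos - p) >= OBJECT_WIDTH for p in scene)

def rollout(scene, steps=10):
    """Randomly extend a partial scene and score it by object count
    (a stand-in objective for 'as many objects as possible')."""
    scene = list(scene)
    for _ in range(steps):
        candidate = random.uniform(0, TABLE_WIDTH)
        if is_valid(scene, candidate):
            scene.append(candidate)
    return len(scene)

def mcts_add_object(scene, n_candidates=20, n_rollouts=30, c=1.4):
    """Pick the next placement: propose candidates, estimate each with random
    rollouts, and select among them with UCB1."""
    candidates = [random.uniform(0, TABLE_WIDTH) for _ in range(n_candidates)]
    candidates = [p for p in candidates if is_valid(scene, p)]
    if not candidates:
        return None
    visits = {p: 0 for p in candidates}
    values = {p: 0.0 for p in candidates}
    for t in range(1, n_rollouts + 1):
        def ucb(p):
            if visits[p] == 0:
                return float("inf")
            return values[p] / visits[p] + c * math.sqrt(math.log(t) / visits[p])
        best = max(candidates, key=ucb)
        score = rollout(scene + [best])
        visits[best] += 1
        values[best] += score
    return max(candidates, key=lambda p: values[p] / max(visits[p], 1))

if __name__ == "__main__":
    scene = []
    while True:
        placement = mcts_add_object(scene)
        if placement is None:
            break
        scene.append(placement)
    print(f"Built a scene with {len(scene)} non-overlapping objects: "
          f"{sorted(round(p, 2) for p in scene)}")
```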
“We are the first to apply MCTS to scene generation by framing the scene generation task as a sequential decision-making process,” says MIT Department of Electrical Engineering and Computer Science (EECS) PhD student Nicholas Pfaff, who is a CSAIL researcher and a lead author on a paper presenting the work. “We keep building on top of partial scenes to produce better or more desired scenes over time. As a result, MCTS creates scenes that are more complex than what the diffusion model was trained on.”
In one particularly telling experiment, MCTS added the maximum number of objects to a simple restaurant scene. It featured as many as 34 items on a table, including massive stacks of dim sum dishes, after training on scenes with only 17 objects on average.
Steerable scene generation also allows you to generate diverse training scenarios via reinforcement learning — essentially, teaching a diffusion model to fulfill an objective by trial-and-error. After you train on the initial data, your system undergoes a second training stage, where you outline a reward (basically, a desired outcome with a score indicating how close you are to that goal). The model automatically learns to create scenes with higher scores, often producing scenarios that are quite different from those it was trained on.
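To make the reward idea concrete, here is a minimal, hypothetical sketch of that second training stage: a toy generator (a categorical distribution over a few invented scene “templates,” rather than a diffusion model) is sampled, each sample is scored by a reward, and a REINFORCE-style update shifts the generator toward higher-scoring scenes. The templates, reward values, and learning rate are assumptions for illustration only.

```python
# Reward-driven fine-tuning of a toy scene generator via REINFORCE.
# Hypothetical illustration; the real system fine-tunes a diffusion model.
import math
import random

TEMPLATES = ["empty table", "table with plates", "cluttered dinner table"]
# Pretend the desired outcome is "more objects on the table".
REWARD = {"empty table": 0.0, "table with plates": 0.5, "cluttered dinner table": 1.0}

logits = [0.0, 0.0, 0.0]  # generator parameters, initially uniform

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def sample(probs):
    r, acc = random.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1

learning_rate = 0.1
for step in range(2000):
    probs = softmax(logits)
    i = sample(probs)
    reward = REWARD[TEMPLATES[i]]
    # REINFORCE: push up the log-probability of the sampled template
    # in proportion to its reward (gradient of log softmax).
    for j in range(len(logits)):
        grad = (1.0 if j == i else 0.0) - probs[j]
        logits[j] += learning_rate * reward * grad

for name, p in zip(TEMPLATES, softmax(logits)):
    print(f"{name}: {p:.2f}")
```

After training, most of the probability mass sits on the highest-reward template, mirroring how the real system drifts toward scenes that score well, even when those scenes differ from the pre-training data.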
Users can also prompt the system directly by typing in specific visual descriptions (like “a kitchen with four apples and a bowl on the table”). Then, steerable scene generation can bring your requests to life with precision. For example, the tool accurately followed users’ prompts at rates of 98 percent when building scenes of pantry shelves, and 86 percent for messy breakfast tables. Both marks are at least a 10 percent improvement over comparable methods like “MiDiffusion” and “DiffuScene.”
The system can also complete specific scenes via prompting or light directions (like “come up with a different scene arrangement using the same objects”). You could ask it to place apples on several plates on a kitchen table, for instance, or put board games and books on a shelf. It’s essentially “filling in the blank” by slotting items in empty spaces, but preserving the rest of a scene.
According to the researchers, the strength of their project lies in its ability to create many scenes that roboticists can actually use. “A key insight from our findings is that it’s OK for the scenes we pre-trained on to not exactly resemble the scenes that we actually want,” says Pfaff. “Using our steering methods, we can move beyond that broad distribution and sample from a ‘better’ one. In other words, generating the diverse, realistic, and task-aligned scenes that we actually want to train our robots in.”
Such vast scenes became the testing grounds where they could record a virtual robot interacting with different items. The machine carefully placed forks and knives into a cutlery holder, for instance, and rearranged bread onto plates in various 3D settings. Each simulation appeared fluid and realistic, resembling the adaptable, real-world robots that steerable scene generation could one day help train.
While the system could be an encouraging path forward in generating lots of diverse training data for robots, the researchers say their work is more of a proof of concept. In the future, they’d like to use generative AI to create entirely new objects and scenes, instead of using a fixed library of assets. They also plan to incorporate articulated objects that the robot could open or twist (like cabinets or jars filled with food) to make the scenes even more interactive.
To make their virtual environments even more realistic, Pfaff and his colleagues may incorporate real-world objects by using a library of objects and scenes pulled from images on the internet and using their previous work on “Scalable Real2Sim.” By expanding how diverse and lifelike AI-constructed robot testing grounds can be, the team hopes to build a community of users that’ll create lots of data, which could then be used as a massive dataset to teach dexterous robots different skills.
“Today, creating realistic scenes for simulation can be quite a challenging endeavor; procedural generation can readily produce a large number of scenes, but they likely won’t be representative of the environments the robot would encounter in the real world. Manually creating bespoke scenes is both time-consuming and expensive,” says Jeremy Binagia, an applied scientist at Amazon Robotics who wasn’t involved in the paper. “Steerable scene generation offers a better approach: train a generative model on a large collection of pre-existing scenes and adapt it (using a strategy such as reinforcement learning) to specific downstream applications. Compared to previous works that leverage an off-the-shelf vision-language model or focus just on arranging objects in a 2D grid, this approach guarantees physical feasibility and considers full 3D translation and rotation, enabling the generation of much more interesting scenes.”
“Steerable scene generation with post training and inference-time search provides a novel and efficient framework for automating scene generation at scale,” says Toyota Research Institute roboticist Rick Cory SM ’08, PhD ’10, who also wasn’t involved in the paper. “Moreover, it can generate ‘never-before-seen’ scenes that are deemed important for downstream tasks. In the future, combining this framework with vast internet data could unlock an important milestone towards efficient training of robots for deployment in the real world.”
Pfaff wrote the paper with senior author Russ Tedrake, the Toyota Professor of Electrical Engineering and Computer Science, Aeronautics and Astronautics, and Mechanical Engineering at MIT, who is also a senior vice president of large behavior models at the Toyota Research Institute and a CSAIL principal investigator. Other authors were Toyota Research Institute robotics researcher Hongkai Dai SM ’12, PhD ’16; team lead and Senior Research Scientist Sergey Zakharov; and Carnegie Mellon University PhD student Shun Iwase. Their work was supported, in part, by Amazon and the Toyota Research Institute. The researchers presented their work at the Conference on Robot Learning (CoRL) in September.
The “steerable scene generation” system creates digital scenes of things like kitchens, living rooms, and restaurants that engineers can use to simulate lots of real-world robot interactions and scenarios.
Science & Tech
‘Human exceptionalism is at the root of the ecological crisis’
Saving the planet requires getting over ourselves, argues author of ‘The Arrogant Ape’
Kermit Pattison
Harvard Staff Writer
October 8, 2025
5 min read
In the grand story of evolution, the crowning human distinction is our big brain. But our large heads have been slow to recognize a less admirable trait of Homo sapiens — self-centeredness.
The human presumption of superiority and entitlement to exploit the natural world is deeply rooted in our religious, cultural, and scientific traditions — and now we are witnessing the consequences, said Christine Webb, a former Harvard lecturer and author of “The Arrogant Ape: The Myth of Human Exceptionalism and Why It Matters.”
“Human exceptionalism is at the root of the ecological crisis,” Webb told a Science Center audience of more than 100 people recently as part of the Harvard Science Book Talks. “This pervasive mindset gives humans a sense of dominion over the rest of nature, set apart from and entitled to commodify the Earth and other species for their own exclusive use.”
The central thesis of her book is that anthropocentrism — or what Webb calls the “human superiority complex” — has pushed our planet to environmental crises such as mass extinctions, rising sea levels, forest fires, and more.
“I’ve come to think of the arrogant ape not as a species, or a culture, or even an individual, but as a tragic protagonist in a Greek drama, blinded by their own hubris,” said Webb. “This unfortunate and dangerous way of viewing our world is a brainwashing of such major proportions that many people remain entirely unaware of it.”
“When you measure the world with a ruler made for humans, other species will inevitably look inferior.”
Without doubt, humans are unique in many attributes (we are the only species known to send rockets into space or convene book talks). But all species, Webb wrote, have evolved specialized adaptations to their environments and are wondrous in their own right. Still, we humans tend to see our own characteristics as more exalted and, thanks to our technological prowess, to view the rest of the natural world as a resource that we are entitled to harvest without constraint.
As Webb wrote, “Human exceptionalism suggests that what is distinctive about humans is more worthy and advanced than the distinguishing features of other forms of life.”
Now an assistant professor at New York University, Webb previously served as a lecturer in Harvard’s Department of Human Evolutionary Biology. The book grew out of her experience teaching an undergraduate seminar here also titled “The Arrogant Ape.”
“So many of the ideas that are folded into this book are students’ ideas,” she told the audience. “I was incredibly inspired by the discussions that took place.”
The book traces how the human sense of exceptionalism has deep roots in the Judeo-Christian religious tradition, Western thought, and even science.
Shakespeare’s Hamlet called humans “the paragon of animals.” In the 18th century, Carl Linnaeus, the founder of biological classification, designated the taxonomic order that includes humans, apes, and monkeys as “primates” to assign us first rank, and dubbed our species “Homo sapiens,” or “the wise man.” In the 1730s, poet Alexander Pope advised that “the proper study of mankind is man.” Accordingly, the humanities celebrate the study of you-know-who.
The very notion of “progress” came to mean human command over nature. Thanks to our ever-advancing scientific and technological knowledge — and a global population that has now reached 8 billion — humans lay claim to an ever-greater share of the world’s resources. As Webb wrote, “the notion of human distinction and the exploitation of the natural world go hand in hand.”
Human exceptionalism has become an unquestioned assumption — something rarely articulated or opened to debate. As Webb told the audience, “it derives power from its invisibility.”
Science too has absorbed this bias. Two centuries ago, Charles Darwin warned of the human habit of flattering ourselves with self-affirming categorizations, but generations of evolutionists continued falling prey to the same old traps. According to Webb, a primatologist who has studied wild baboons and gorillas in Africa, comparative studies often are designed with confirmation bias or use human attributes as metrics of evolutionary advancement.
“When you measure the world with a ruler made for humans,” she said, “other species will inevitably look inferior.” Webb drew a laugh when she showed a clip from the satirical newspaper The Onion headlined “Study: Dolphins Not So Intelligent On Land.”
Yet, Webb argued, the human presumption of superiority is a learned behavior. Many children exhibit a natural empathy for animals and humans have an innate sense of wonder for nature that Harvard biologist E.O. Wilson termed “biophilia.”
The remedy to our ecological crisis, she believes, is embracing a trait that is often undervalued: humility. In reawakening ourselves to the wondrous diversity of nature, we might become more willing to preserve it.
“Shedding this anthropocentric lens, I believe, can yield very humbling realizations,” she concluded in her talk, “and this humility might impart true wisdom — the quality our species, Homo sapiens, has assigned itself, yet one we can only ever truly realize by unlearning human exceptionalism.”
Science & Tech
Lauren Williams awarded MacArthur ‘genius grant’
Stephanie Mitchell/Harvard Staff Photographer
Kermit Pattison
Harvard Staff Writer
October 8, 2025
5 min read
Math professor honored for theoretical breakthroughs with sometimes surprising applications across phenomena such as tsunamis, traffic
Lauren Williams ’00 is a theoretical mathematician and recently she felt stuck in her research, a recurring frustration for a scholar who wrestles with difficult conceptual problems.
Then, as Williams worked quietly in her home office, she was jolted by an unexpected revelation: The MacArthur Foundation phoned to inform Williams that she had won a celebrated “genius grant” — a “no-strings-attached” fellowship that provides recipients $800,000 over five years.
“I was completely shocked,” recalled Williams, Dwight Parker Robinson Professor of Mathematics. “I was just sort of trying to verify for myself that I was awake, and this was real.”
Williams was one of 22 fellows announced Wednesday. The MacArthur Foundation credited Williams for “elucidating unexpected connections” between her field of algebraic combinatorics and other areas in math and physics.
The foundation said: “With a curiosity-driven approach to research and willingness to collaborate across disciplines, Williams is expanding fundamental mathematical theory and building fruitful connections between mathematics and other scientific fields.”
Williams leading a class in 2018.
Harvard file photo
Williams specializes in algebra and combinatorics and how they can be applied to problems in math and physics. Simply put, combinatorics is the study of discrete, finite things that can be counted as opposed to things that are continuous — think the continuous surface of the ocean vs. the waves.
Much of her work involves the “positive Grassmannian,” a geometric shape whose points represent simpler geometric objects.
Other scholars have discovered that her theoretical work applies to a diverse array of phenomena such as shallow water waves, tsunamis, collisions of fundamental particles, protein synthesis, and the flow of traffic on one-way streets.
Williams remains both fascinated and perplexed about why the positive Grassmannian keeps popping up in such disparate domains. “It’s one of the biggest mysteries that I’ve encountered,” she said.
Williams draws inspiration from a quotation from British mathematician G.H. Hardy: “The mathematician’s patterns, like the painter’s or the poet’s, must be beautiful; the ideas, like the colours or the words, must fit together in a harmonious way. Beauty is the first test: there is no permanent place in the world for ugly mathematics.”
“If you ask a question and the answer is not beautiful, that means you asked the wrong question.”
At an aesthetic level, Williams sees many parallels between her work in pure math and the arts: all solve puzzles within the rules of the medium; all involve patterns, beauty, and harmony.
“If you ask a question and the answer is not beautiful, that means you asked the wrong question,” she said.
Raised in the suburbs of Los Angeles, Williams grew up as a voracious reader, aspiring writer, and violinist. In fourth grade, she discovered she was good at math when she won her school district competition, and she continued pursuing her interest in summer programs and math competitions.
As a Harvard undergraduate, she majored in mathematics at a time when there were no women on the department faculty.
After earning her Ph.D. at MIT, she returned to Harvard on a Benjamin Peirce postdoctoral fellowship. She spent nine years on the faculty at the University of California at Berkeley before returning to Harvard in 2018 as only the second female tenured professor in the history of the Math Department.
With the MacArthur grant turning a spotlight on her corner of academia, Williams hopes to inspire other young women who aspire to careers in mathematics.
“When I was a student, I worried quite a lot about whether I could get an academic job and whether a career in academia was compatible with having children,” said Williams, now a mother of two. “To the students who are interested in a career in math academia but are worried about the same issues, I would very much encourage them to go for it.”
Her career has been a balancing act — sometimes quite literally. As a young professor, Williams had an infant who was a fitful sleeper, and night after night she and her husband took turns rocking the child to sleep by bobbing on a yoga ball.
“If you’re holding a baby in a dark room, gently bouncing, there’s nothing you can do except think,” said Williams. “One night, I started thinking about a question and that actually led to my next paper.”
(Also among this year’s MacArthur cohort was Hahrie Han ’97, a political science professor at Johns Hopkins University who was recognized for her research into how people engage in civil and political affairs.)
For Williams, the award comes at an opportune moment. She had three federal research grants terminated in May — including one for a conference scheduled for the following month, forcing her and her colleagues to scramble to reorganize the event.
“This award really couldn’t come at a better time, personally,” she said.
At a larger level, the recognition is about more than one scholar. Like her beloved positive Grassmannian, her achievement reflects those of many others.
“I’m shocked and honored, but also just incredibly grateful to the dozens, if not hundreds, of amazing teachers, mentors, collaborators, friends, and family members who have supported me,” said Williams. “The biggest overwhelming reaction for me is really just gratitude.”
Science & Tech
Chilling discovery
Suk Hyun Sung (left) and Ismail El Baggari. Photos by Niles Singer/Harvard Staff Photographer
Clea Simon
Harvard Correspondent
October 8, 2025
6 min read
Physicists go to extremes to capture quantum materials
Researchers at the Rowland Institute at Harvard have pioneered a new way to achieve the coolest possible temperatures to image materials at sub-atomic scale. The findings, which combine the technical know-how of Rowland staff scientists with that of collaborators at the University of Michigan, make possible a new era of ultra-cold microscopy.
Cryogenic transmission electron microscopy — TEM — has long played a vital role in many branches of science, from biology to physics, because the very low temperatures allow close examination of samples of everything from inorganic crystals to complex biomolecules at the atomic scale. Typically, the cryogen, or cooling agent, is liquid nitrogen, which boils at 321 below zero Fahrenheit (or 77 Kelvin) — impressive, but not cold enough to see those strange quantum wriggles.
That’s why scientists have strived over the last decade to go colder by using liquid helium, which boils at about 452 degrees below zero Fahrenheit, or roughly 4 Kelvin, very close to absolute zero. But this comes with serious technical problems that affect the mechanical stability of the microscope.
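For reference, those boiling points follow from the standard Kelvin-to-Fahrenheit conversion (the 4.2 K value is liquid helium’s boiling point at atmospheric pressure):

$$
T_{\mathrm{F}} = \tfrac{9}{5}\,T_{\mathrm{K}} - 459.67,
\qquad 77\ \mathrm{K} \approx -321^{\circ}\mathrm{F},
\qquad 4.2\ \mathrm{K} \approx -452^{\circ}\mathrm{F}.
$$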
Harvard researchers envisioned a new way to use liquid helium for a more stable approach to this high-level microscopy, a breakthrough explained in a paper published last month in PNAS. Led by principal investigator and Rowland fellow Ismail El Baggari, the team built novel partnerships, both within the institute and outside it with University of Michigan Professor Robert Hovden, to create a usable new technology. The research is among the first high-impact papers (another was on robot flies) since Rowland moved to Harvard’s science campus a year ago.
Cryogenic cooling during microscopy was originally conceived as a way to preserve the structure of biological molecules — work that led to the 2017 Nobel Prize in chemistry — and to study cells, proteins, and other soft matter. However, El Baggari and Suk Hyun Sung, a postdoc on the team, saw its potential in quantum physics.
“We’re physicists and we’re interested in imaging weird materials called quantum materials that have different properties at low temperatures,” said El Baggari, noting that electrons exhibit quantum behavior only at extremely low temperatures. “And so we need to cool down the materials to access those new properties.” Liquid helium cooling was seen as a means to do this.
To utilize liquid helium, however, the researchers had to have an electron microscope that would allow such cooling. “In the past, we’ve used electron microscopy to image the atomic structure of these materials at high temperatures, but it turns out that it’s very difficult to get high-resolution images with liquid helium cooling,” El Baggari said.
“Liquid helium evaporates so quickly and so easily that any external heat vaporizes it,” he added. “That then introduces vibrations as bubbles form and as the flow is disturbed by a mixture of gas and liquid.”
These vibrations blurred the images, much as boiling water in a glass saucepan obscures the view through the pot, even though the view is clear when the water is cold. The fast evaporation also meant that the cold temperatures could be maintained for only about 20 minutes, far too short for meaningful measurements.
“We built a geophone for the microscope that is so sensitive, it can detect vibrations within fractions of atomic distance.”
Winfield Hill, director of electronic engineering
The vibration sensor.
The electron microscope specimen holder.
To solve this problem, the researchers needed to create a method that would allow the use of liquid helium, but in a way that didn’t allow vibrations. The researchers collaborated with Rowland manager Erik Madsen; director of electronic engineering Winfield Hill; and Alan Stern, a staff computational scientist. Together, they built a system that could maintain cold temperatures for hours on end. They also tracked and isolated the vibrations of the cooling system from reaching the specimen.
“We had the general ideas, but we didn’t know at the start the details of cryogenic design, machining to high tolerances, and precision electronics,” said El Baggari. “We had to develop new tools and processes to begin tackling our problems.”
“Ismail had the idea to use geophones to detect minuscule vibrations in the ultra-cold TEM set-up, to identify places to dampen the ‘noise,’ particularly on the liquid helium input. So we built a geophone for the microscope that is so sensitive, it can detect vibrations within fractions of atomic distance — exactly the scale needed for this microscope to be able to image clearly at sub-atomic resolution,” said Hill.
The stakes were high because of the complexity of the machines. “These microscopes are multimillion-dollar machines, and it was quite nerve-wracking to think about inserting our own new instrument in what has never been tested,” said Sung.
Into this huge and delicate machine, he continued, the builders had to create side-entry holders that “would fit in like a glass slide fits into an ordinary microscope — without breaking anything.” The sample had to land in the right position to within 25 microns, or a third of the width of a human hair. Any deviations meant the sample could not be imaged, vibrations were excessive, or vacuum levels were poor.
“For the first months of building things, we were just measuring commercially available side-entry holders, making sure that all the dimensions were correct,” continued Sung. “We didn’t know what the critical dimensions were because all the holders were slightly different, and we didn’t know what actually mattered.” After three years of testing the sample holder and improving vibration isolation, the team was able to capture the first high-resolution images of quantum materials while operating for many hours at ultracool temperatures.
“Suddenly we were studying materials that we historically had only been able to examine at room temperature,” said El Baggari. “We were never able to image them while they were actually exhibiting the emergent properties that we and our collaborators are interested in. So now we can actually use the electron microscope as a primary tool to directly tackle these quantum electronic phases in materials.”
Continued improvements of the new tool could open new fields of inquiry, such as simultaneously applying a voltage to working devices or manipulating materials by stretching or pressing on them. “The microscopy experiments that we’ve been routinely doing at room temperatures, we can now do it at low temperatures where things get even more exciting,” he said.
The research described in this article was partially funded by the National Science Foundation and the U.S. Department of Energy.
Every time you check the time on your phone, make an online transaction, or use a navigation app, you are depending on the precision of atomic clocks.
An atomic clock keeps time by relying on the “ticks” of atoms as they naturally oscillate at rock-steady frequencies. Today’s atomic clocks operate by tracking cesium atoms, which tick over 10 billion times per second. Each of those ticks is precisely tracked using lasers that oscillate in sync, at microwave frequencies.
Scientists are developing next-generation atomic clocks that rely on even faster-ticking atoms such as ytterbium, which can be tracked with lasers at higher, optical frequencies. If they can be kept stable, optical atomic clocks could track even finer intervals of time, up to 100 trillion times per second.
Now, MIT physicists have found a way to improve the stability of optical atomic clocks, by reducing “quantum noise” — a fundamental measurement limitation due to the effects of quantum mechanics, which obscures the atoms’ pure oscillations. In addition, the team discovered that an effect of a clock’s laser on the atoms, previously considered irrelevant, can be used to further stabilize the laser.
The researchers developed a method to harness a laser-induced “global phase” in ytterbium atoms, and have boosted this effect with a quantum-amplification technique. The new approach doubles the precision of an optical atomic clock, enabling it to discern twice as many ticks per second compared to the same setup without the new method. What’s more, they anticipate that the precision of the method should increase steadily with the number of atoms in an atomic clock.
The researchers detail the method, which they call global phase spectroscopy, in a study appearing today in the journal Nature. They envision that the clock-stabilizing technique could one day enable portable optical atomic clocks that can be transported to various locations to measure all manner of phenomena.
“With these clocks, people are trying to detect dark matter and dark energy, and test whether there really are just four fundamental forces, and even to see if these clocks can predict earthquakes,” says study author Vladan Vuletić, the Lester Wolfe Professor of Physics at MIT. “We think our method can help make these clocks transportable and deployable to where they’re needed.”
The paper’s co-authors are Leon Zaporski, Qi Liu, Gustavo Velez, Matthew Radzihovsky, Zeyang Li, Simone Colombo, and Edwin Pedrozo-Peñafiel, who are members of the MIT-Harvard Center for Ultracold Atoms and the MIT Research Laboratory of Electronics.
Ticking time
In 2020, Vuletić and his colleagues demonstrated that an atomic clock could be made more precise by quantumly entangling the clock’s atoms. Quantum entanglement is a phenomenon by which particles can be made to behave in a collective, highly correlated manner. When atoms are quantumly entangled, they redistribute any noise, or uncertainty in measuring the atoms’ oscillations, in a way that reveals a clearer, more measurable “tick.”
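The scaling behind this is textbook quantum metrology rather than a figure from the study: for $N$ uncorrelated atoms interrogated for a time $T$, quantum projection noise limits the achievable frequency uncertainty to the standard quantum limit, while entangled (spin-squeezed) ensembles can in principle approach the Heisenberg limit,

$$
\delta\nu_{\mathrm{SQL}} \propto \frac{1}{\sqrt{N}\,T},
\qquad
\delta\nu_{\mathrm{Heisenberg}} \propto \frac{1}{N\,T}.
$$

This is also why the team anticipates that the precision gain should grow with the number of atoms in the clock.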
In their previous work, the team induced quantum entanglement among several hundred ytterbium atoms that they first cooled and trapped in a cavity formed by two curved mirrors. They sent a laser into the cavity, which bounced thousands of times between the mirrors, interacting with the atoms and causing the ensemble to entangle. They were able to show that quantum entanglement could improve the precision of existing atomic clocks by essentially reducing the noise, or uncertainty between the laser’s and atoms’ tick rates.
At the time, however, they were limited by the ticking instability of the clock’s laser. In 2022, the same team derived a way to further amplify the difference in laser versus atom tick rates with “time reversal” — a trick that relies on entangling and de-entangling the atoms to boost the signal acquired in between.
However, in that work the team was still using traditional microwaves, which oscillate at much lower frequencies than the optical frequency standards ytterbium atoms can provide. It was as if they had painstakingly lifted a film of dust off a painting, only to then photograph it with a low-resolution camera.
“When you have atoms that tick 100 trillion times per second, that’s 10,000 times faster than the frequency of microwaves,” Vuletić says. “We didn’t know at the time how to apply these methods to higher-frequency optical clocks that are much harder to keep stable.”
About phase
In their new study, the team found a way to apply their previously developed time-reversal approach to optical atomic clocks. After entangling the atoms, they sent in a laser that oscillates near the optical frequency of the entangled ensemble.
“The laser ultimately inherits the ticking of the atoms,” says first author Zaporski. “But in order for this inheritance to hold for a long time, the laser has to be quite stable.”
The researchers found they were able to improve the stability of an optical atomic clock by taking advantage of a phenomenon that scientists had assumed was inconsequential to the operation. They realized that when light is sent through entangled atoms, the interaction can cause the atoms to jump up in energy, then settle back down into their original energy state and still carry the memory about their round trip.
“One might think we’ve done nothing,” Vuletić says. “You get this global phase of the atoms, which is usually considered irrelevant. But this global phase contains information about the laser frequency.”
In other words, they realized that the laser was inducing a measurable change in the atoms, despite bringing them back to the original energy state, and that the magnitude of this change depends on the laser’s frequency.
“Ultimately, we are looking for the difference of laser frequency and the atomic transition frequency,” explains co-author Liu. “When that difference is small, it gets drowned by quantum noise. Our method amplifies this difference above this quantum noise.”
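Schematically, and with notation assumed here rather than taken from the paper: if the atoms return to their initial state but pick up a global phase $\varphi$ that depends on the detuning $\Delta = \omega_{\mathrm{laser}} - \omega_{\mathrm{atom}}$ over an interaction time $\tau$, then a small frequency offset shows up as a phase shift that can be read out, and amplified by the entanglement-based scheme, before quantum noise swamps it:

$$
\varphi = f(\Delta)\,\tau,
\qquad
\delta\varphi \approx \frac{\partial f}{\partial \Delta}\,\delta\Delta\,\tau .
$$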
In their experiments, the team applied this new approach and found that through entanglement they were able to double the precision of their optical atomic clock.
“We saw that we can now resolve nearly twice as small a difference in the optical frequency, or the clock ticking frequency, without running into the quantum noise limit,” Zaporski says. “Although it’s a hard problem in general to run atomic clocks, the technical benefits of our method will make it easier, and we think this can enable stable, transportable atomic clocks.”
This research was supported, in part, by the U.S. Office of Naval Research, the National Science Foundation, the U.S. Defense Advanced Research Projects Agency, the U.S. Department of Energy, the U.S. Office of Science, the National Quantum Information Science Research Centers, and the Quantum Systems Accelerator.
Computer scientists at ETH Zurich have developed a digital tool capable of searching through millions of published DNA records in a matter of seconds. This can significantly accelerate research into antibiotic resistance and unknown pathogens.
For decades, it’s been known that subtle chemical patterns exist in metal alloys, but researchers thought they were too minor to matter — or that they got erased during manufacturing. However, recent studies have shown that in laboratory settings, these patterns can change a metal’s properties, including its mechanical strength, durability, heat capacity, radiation tolerance, and more.
Now, researchers at MIT have found that these chemical patterns also exist in conventionally manufactured metals. The surprising finding revealed a new physical phenomenon that explains the persistent patterns.
In a paper published in Nature Communications today, the researchers describe how they tracked the patterns and discovered the physics that explains them. The authors also developed a simple model to predict chemical patterns in metals, and they show how engineers could use the model to tune the effect of such patterns on metallic properties, for use in aerospace, semiconductors, nuclear reactors, and more.
“The conclusion is: You can never completely randomize the atoms in a metal. It doesn’t matter how you process it,” says Rodrigo Freitas, the TDK Assistant Professor in the Department of Materials Science and Engineering. “This is the first paper showing these non-equilibrium states that are retained in the metal. Right now, this chemical order is not something we’re controlling for or paying attention to when we manufacture metals.”
For Freitas, an early-career researcher, the findings offer vindication for exploring a crowded field that he says few believed would lead to unique or broadly impactful results. He credits the U.S. Air Force Office of Scientific Research, which supported the work through their Young Investigator Program. He also credits the collaborative effort that enabled the paper, which features three MIT PhD students as co-first authors: Mahmudul Islam, Yifan Cao, and Killian Sheriff.
“There was the question of whether I should even be tackling this specific problem because people have been working on it for a long time,” Freitas says. “But the more I learned about it, the more I saw researchers were thinking about this in idealized laboratory scenarios. We wanted to perform simulations that were as realistic as possible to reproduce these manufacturing processes with high fidelity. My favorite part of this project is how non-intuitive the findings are. The fact that you cannot completely mix something together, people didn’t see that coming.”
From surprises to theories
Freitas’ research team began with a practical question: How fast do chemical elements mix during metal processing? Conventional wisdom held that there’s a point where the chemical composition of metals becomes completely uniform from mixing during manufacturing. By finding that point, the researchers thought they could develop a simple way to design alloys with different levels of atomic order, also known as short-range order.
The researchers used machine-learning techniques to track millions of atoms as they moved and rearranged themselves under conditions that mimicked metal processing.
“The first thing we did was to deform a piece of metal,” Freitas explains. “That’s a common step during manufacturing: You roll the metal and deform it and heat it up again and deform it a little more, so it develops the structure you want. We did that and we tracked chemical order. The thought was as you deform the material, its chemical bonds are broken and that randomizes the system. These violent manufacturing processes essentially shuffle the atoms.”
The researchers hit a snag during the mixing process: The alloys never reached a fully random state. That was a surprise, because no known physical mechanism could explain the result.
“It pointed to a new piece of physics in metals,” the researchers write in the paper. “It was one of those cases where applied research led to a fundamental discovery.”
To uncover the new physics, the researchers developed computational tools, including high-fidelity machine-learning models, to capture atomic interactions, along with new statistical methods that quantify how chemical order changes over time. They then applied these tools in large-scale molecular dynamics simulations to track how atoms rearrange during processing.
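The article does not spell out which order metric the team used, but a standard way to quantify chemical short-range order in a binary alloy is the Warren–Cowley parameter, which compares how often unlike atoms sit next to each other against what a perfectly random mixture would give. A minimal sketch of that calculation (the toy neighbor list below is illustrative, not data from the study):

```python
import random

def warren_cowley_alpha(types, neighbors, a="A", b="B"):
    """Warren-Cowley short-range order parameter alpha_AB for a binary alloy.

    types:     list of atom types, e.g. ["A", "B", "A", ...]
    neighbors: neighbors[i] is the list of indices bonded to atom i
    alpha = 1 - p_AB / c_B, where p_AB is the probability that a neighbor of
    an A atom is a B atom, and c_B is the overall B concentration.
    alpha = 0 for a random mixture, alpha < 0 when unlike pairs are favored
    (ordering), alpha > 0 when like atoms cluster.
    """
    c_b = types.count(b) / len(types)
    ab_pairs = total_pairs = 0
    for i, t in enumerate(types):
        if t != a:
            continue
        for j in neighbors[i]:
            total_pairs += 1
            if types[j] == b:
                ab_pairs += 1
    return 1.0 - (ab_pairs / total_pairs) / c_b

# Toy example: a random ring of 1,000 atoms, roughly 50/50 A and B.
random.seed(0)
n = 1000
types = [random.choice("AB") for _ in range(n)]
neighbors = [[(i - 1) % n, (i + 1) % n] for i in range(n)]
print(f"alpha_AB for a random mixture: {warren_cowley_alpha(types, neighbors):+.3f}")
# Expected to sit near zero; tracking this number over simulated processing
# steps is one way to watch chemical order appear or persist.
```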
The researchers found some standard chemical arrangements in their processed metals, but at higher temperatures than would normally be expected. Even more surprisingly, they found completely new chemical patterns that arise only from manufacturing-like processing and had never been observed before. The researchers referred to these patterns as “far-from-equilibrium states.”
The researchers also built a simple model that reproduced key features of the simulations. The model explains how the chemical patterns arise from defects known as dislocations, which are like three-dimensional scribbles within a metal. As the metal is deformed, those scribbles warp, shuffling nearby atoms along the way. Previously, researchers believed that shuffling completely erased order in the metals, but they found that dislocations favor some atomic swaps over others, resulting not in randomness but in subtle patterns that explain their findings.
“These defects have chemical preferences that guide how they move,” Freitas says. “They look for low energy pathways, so given a choice between breaking chemical bonds, they tend to break the weakest bonds, and it’s not completely random. This is very exciting because it’s a non-equilibrium state: It’s not something you’d see naturally occurring in materials. It’s the same way our bodies live in non-equilibrium. The temperature outside is always hotter or colder than our bodies, and we’re maintaining that steady state equilibrium to stay alive. That’s why these states exist in metal: the balance between an internal push toward disorder plus this ordering tendency of breaking certain bonds that are always weaker than others.”
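A toy illustration of that bias (a schematic of the general idea, not the authors' model, with made-up energies and sizes): shuffle a one-dimensional binary “alloy” by swapping neighboring atoms, but accept swaps that break weak bonds more readily than swaps that break strong ones. Even after many shuffles the arrangement does not end up fully random.

```python
import math
import random

# Toy shuffle of a 1D binary "alloy": swaps are biased toward breaking the
# weakest bonds, loosely mimicking the bond preferences described above.
# All energies, sizes, and the temperature-like parameter are made up.
random.seed(1)
n = 2000
bond_energy = {("A", "A"): -1.0, ("B", "B"): -1.0,
               ("A", "B"): -1.2, ("B", "A"): -1.2}   # unlike bonds are stronger
kT = 0.25                                            # effective shuffle "temperature"

atoms = [random.choice("AB") for _ in range(n)]

def local_energy(cfg, i):
    """Energy of the two bonds touching site i (periodic ring)."""
    return (bond_energy[(cfg[(i - 1) % n], cfg[i])] +
            bond_energy[(cfg[i], cfg[(i + 1) % n])])

def unlike_fraction(cfg):
    """Fraction of neighboring pairs that are A-B; 0.5 means a random 50/50 mix."""
    return sum(cfg[i] != cfg[(i + 1) % n] for i in range(n)) / n

print(f"before shuffling: unlike-pair fraction = {unlike_fraction(atoms):.3f}")

for _ in range(200_000):
    i = random.randrange(n)
    j = (i + 1) % n
    before = local_energy(atoms, i) + local_energy(atoms, j)
    atoms[i], atoms[j] = atoms[j], atoms[i]
    after = local_energy(atoms, i) + local_energy(atoms, j)
    # Biased acceptance: swaps that break weak bonds (low energy cost) are
    # favored; swaps that break strong bonds are sometimes rejected.
    if random.random() >= math.exp(-max(after - before, 0.0) / kT):
        atoms[i], atoms[j] = atoms[j], atoms[i]   # reject: undo the swap

print(f"after shuffling:  unlike-pair fraction = {unlike_fraction(atoms):.3f}")
# The shuffled state drifts away from 0.5 rather than staying random:
# bond preferences alone are enough to leave a subtle chemical pattern behind.
```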
Applying a new theory
The researchers are now exploring how these chemical patterns develop across a wide range of manufacturing conditions. The result is a map that links various metal processing steps to different chemical patterns in metal.
To date, this chemical order and the properties it tunes have been treated largely as an academic subject. With this map, the researchers hope engineers can begin treating these patterns as design levers that can be pulled during production to obtain new properties.
“Researchers have been looking at the ways these atomic arrangements change metallic properties — a big one is catalysis,” Freitas says of the process that drives chemical reactions. “Electrochemistry happens at the surface of the metal, and it’s very sensitive to local atomic arrangements. And there have been other properties that you wouldn't think would be influenced by these factors. Radiation damage is another big one. That affects these materials’ performance in nuclear reactors.”
Researchers have already told Freitas the paper could help explain other surprise findings about metallic properties, and he’s excited for the field to move from fundamental research into chemical order to more applied work.
“You can think of areas where you need very optimized alloys like aerospace,” Freitas says. “They care about very specific compositions. Advanced manufacturing now makes it possible to combine metals that normally wouldn’t mix through deformation. Understanding how atoms actually shuffle and mix in those processes is crucial, because it’s the key to gaining strength while still keeping the low density. So, this could be a huge deal for them.”
This work was supported, in part, by the U.S. Air Force Office of Scientific Research, MathWorks, and the MIT-Portugal Program.
A computer simulation shows a metallic alloy in which atoms (colored spheres) are arranged in subtle chemical patterns beneath a network of dislocations (green lines). These tangled defects move during processing and help create the nonequilibrium atomic order discovered by the MIT team.
One of the newest weapons that scientists have developed against cancer is a type of engineered immune cell known as CAR-NK (natural killer) cells. Similar to CAR-T cells, these cells can be programmed to attack cancer cells.
MIT and Harvard Medical School researchers have now come up with a new way to engineer CAR-NK cells that makes them much less likely to be rejected by the patient’s immune system, which is a common drawback of this type of treatment.
The new advance may also make it easier to develop “off-the-shelf” CAR-NK cells that could be given to patients as soon as they are diagnosed. Traditional approaches to engineering CAR-NK or CAR-T cells usually take several weeks.
“This enables us to do one-step engineering of CAR-NK cells that can avoid rejection by host T cells and other immune cells. And, they kill cancer cells better and they’re safer,” says Jianzhu Chen, an MIT professor of biology, a member of the Koch Institute for Integrative Cancer Research, and one of the senior authors of the study.
In a study of mice with humanized immune systems, the researchers showed that these CAR-NK cells could destroy most cancer cells while evading the host immune system.
Rizwan Romee, an associate professor of medicine at Harvard Medical School and Dana-Farber Cancer Institute, is also a senior author of the paper, which appears today in Nature Communications. The paper’s lead author is Fuguo Liu, a postdoc at the Koch Institute and a research fellow at Dana-Farber.
Evading the immune system
NK cells are a critical part of the body’s natural immune defenses, and their primary responsibility is to locate and kill cancer cells and virus-infected cells. One of their cell-killing strategies, also used by T cells, is a process called degranulation. Through this process, immune cells release a protein called perforin, which can poke holes in another cell to induce cell death.
To create CAR-NK cells to treat cancer patients, doctors first take a blood sample from the patient. NK cells are isolated from the sample and engineered to express a protein called a chimeric antigen receptor (CAR), which can be designed to target specific proteins found on cancer cells.
Then, the cells spend several weeks proliferating until there are enough to transfuse back into the patient. A similar approach is also used to create CAR-T cells. Several CAR-T cell therapies have been approved to treat blood cancers such as lymphoma and leukemia, but CAR-NK treatments are still in clinical trials.
Because it takes so long to grow a population of engineered cells that can be infused into the patient, and those cells may not be as viable as cells that came from a healthy person, researchers are exploring an alternative approach: using NK cells from a healthy donor.
Such cells could be grown in large quantities and would be ready whenever they were needed. However, the drawback to these cells is that the recipient’s immune system may see them as foreign and attack them before they can start killing cancer cells.
In the new study, the MIT team set out to find a way to help NK cells “hide” from a patient’s immune system. Through studies of immune cell interactions, they showed that NK cells could evade a host T-cell response if they did not carry surface proteins called HLA class 1 proteins. These proteins, usually expressed on NK cell surfaces, can trigger T cells to attack if the immune system doesn’t recognize them as “self.”
To take advantage of this, the researchers engineered the cells to express a sequence of siRNA (short interfering RNA) that interferes with the genes for HLA class 1. They also delivered the CAR gene, as well as the gene for either PD-L1 or single-chain HLA-E (SCE). PD-L1 and SCE are proteins that make NK cells more effective by turning up genes that are involved in killing cancer cells.
All of these genes can be carried on a single piece of DNA, known as a construct, making it simple to transform donor NK cells into immune-evasive CAR-NK cells. The researchers used this construct to create CAR-NK cells targeting a protein called CD-19, which is often found on cancerous B cells in lymphoma patients.
NK cells unleashed
The researchers tested these CAR-NK cells in mice with a human-like immune system. These mice were also injected with lymphoma cells.
Mice that received CAR-NK cells with the new construct maintained the NK cell population for at least three weeks, and the NK cells were able to nearly eliminate cancer in those mice. In mice that received either NK cells with no genetic modifications or NK cells with only the CAR gene, the host immune cells attacked the donor NK cells. In these mice, the NK cells died out within two weeks, and the cancer spread unchecked.
The researchers also found that these engineered CAR-NK cells were much less likely to induce cytokine release syndrome — a common side effect of immunotherapy treatments, which can cause life-threatening complications.
Because of CAR-NK cells’ potentially better safety profile, Chen anticipates that they could eventually be used in place of CAR-T cells. For any CAR-NK cells that are now in development to target lymphoma or other types of cancer, it should be possible to adapt them by adding the construct developed in this study, he says.
The researchers now hope to run a clinical trial of this approach, working with colleagues at Dana-Farber. They are also working with a local biotech company to test CAR-NK cells to treat lupus, an autoimmune disorder that causes the immune system to attack healthy tissues and organs.
The research was funded, in part, by Skyline Therapeutics, the Koch Institute Frontier Research Program through the Kathy and Curt Marble Cancer Research Fund and the Elisa Rah (2004, 2006) Memorial Fund, the Claudia Adams Barr Foundation, and the Koch Institute Support (core) Grant from the National Cancer Institute.
By Ms Nguyen Tran Bao Phuong, Research Fellow, and Ms Tran Thi Ngoc My, Research Analyst, both at the Asia Competitiveness Institute, Lee Kuan Yew School of Public Policy at NUS. The Business Times, 7 October 2025, p18
Capital 95.8FM, 30 September 2025; Hao 96.3FM, 30 September 2025; 8world Online, 30 September 2025; The Straits Times, 1 October 2025, Singapore, pA14; Lianhe Zaobao, 1 October 2025, Singapore, p7
NUS and OceanX have embarked on a 24-day voyage to the Monsoon Rise in the Christmas Island Seamount Province, a largely unexplored seamount chain in the eastern Indian Ocean. This is Singapore’s first deep-sea scientific expedition since signing the United Nations Biodiversity Beyond National Jurisdiction (BBNJ) Agreement.
A concerted effort to explore uncharted waters
On board the state-of-the-art research vessel − the OceanXplorer − 14 NUS researchers are joined by 2 researchers from NTU, 2 from Indonesia, and 1 each from Thailand, Vietnam and Fiji.
On 4 October 2025, a ceremonial send-off on the OceanXplorer at Marina @ Keppel Bay was graced by government leaders, scientists, and partners. Dr Vivian Balakrishnan, Minister for Foreign Affairs; Mr Heng Swee Keat, Chairman of National Research Foundation (NRF); Professor Tan Eng Chye, NUS President; Mr Ray Dalio, Co-Founder of OceanX; and Mr Mark Dalio, Co-Founder and Co-CEO of OceanX, were among the distinguished guests who attended the event.
“This expedition demonstrates Singapore’s commitment to the conservation and sustainable use of marine biodiversity in areas beyond national jurisdiction, especially following the adoption of the BBNJ Agreement in 2023 and its imminent entry into force in January 2026. Such cutting-edge research is critical in building the scientific foundation needed to understand, protect, and preserve marine biodiversity,” said Dr Balakrishnan.
“Singapore is committed to the inclusive implementation of the BBNJ Agreement which taps on the expertise of ASEAN and Small Island Developing States. The collaborative approach will contribute to capacity building, and the fair and equitable sharing of benefits from marine scientific research in the high seas,” added Dr Balakrishnan.
This expedition and the development of deep-sea research capacity in Singapore are supported by a S$6 million grant from the NRF.
“This mission is more than a voyage of discovery — it is a contribution to advancing collaboration and ocean conservation, for a better future of our planet,” said NUS President Prof Tan.
Diving into greater depths
Launched in 2016, OceanX is a nonprofit initiative that combines ocean exploration, science, education and storytelling to uncover the mysteries of the ocean, unlock its potential and inspire greater protection of it. A floating laboratory equipped with remotely operated vehicles and submersibles that can reach maximum depths of 6,000m and 1,000m respectively, the OceanXplorer has since conducted expeditions in the Red Sea and in the seas around Malaysia, Indonesia and Africa.
“OceanXplorer was designed as a platform to unite exploration, science, media, and education. Partnering with NUS on this mission allows us to expand knowledge of one of the last great unknowns of our planet and inspire people across Asia and the world to protect it,” said Mr Mark Dalio, Founder and Co-CEO of OceanX.
Led by Dr Tan Koh Siang, Chief Expedition Scientist and Head of the Marine Biology and Ecology Laboratory at the NUS Tropical Marine Science Institute (TMSI), the 24-day mission also includes researchers from Vietnam National University, Indonesia’s Diponegoro University and National Research and Innovation Agency, Thailand’s Kasetsart University, and Fiji’s Social Empowerment Education Programme.
“For Singapore and NUS, this expedition is both a scientific breakthrough and a regional milestone. It strengthens our deep-sea research capacity, builds collaboration across ASEAN, and positions Singapore as a hub for international marine science,” said Professor Peter Ng, mission lead from the Lee Kong Chian Natural History Museum (LKCNHM) and TMSI at NUS.
Uncovering seamount secrets
Seamounts are massive underwater mountains with steep sides rising from the seafloor. Often formed by extinct volcanoes, seamounts are known to be teeming with marine life. Their solid surfaces allow corals and sponges to grow, providing small sea creatures with a home, which in turn attracts larger predators. Although there are over 10,000 seamounts globally, these underwater oases remain poorly documented. Establishing a scientific record of the marine life in these deep-sea regions is crucial for shaping conservation and sustainable management strategies.
The expedition aims to characterise the biodiversity over more than 17,000 square kilometres of deep-sea terrain. Specially constructed traps – made of a system of traps within a trap – will be deployed to ensure that captured fish will be separated from their predators. Samples of seawater from different regions will also be collected to study eDNA (environmental DNA) – the trace amount of DNA shed by organisms from their skin, body fluids or waste. Extracting eDNA from water samples for sequencing and genetic analysis allows scientists to detect biodiversity without direct capture.
Specimens discovered from the expedition will be curated at LKCNHM and findings will be published in peer-reviewed and open access journals to support international ocean conservation and management efforts.
“The Indian Ocean remains one of the least studied parts of our planet. This mission gives us an unprecedented opportunity to gather insights that can transform how we understand biodiversity in this region,” said Dr Vincent Pieribone, Co-CEO and Chief Science Officer of OceanX.
In 2018, LKCNHM and TMSI collaborated with the Indonesian Institute of Sciences for a 14-day deep-sea biodiversity expedition. The South Java Deep Sea Biodiversity Expedition (SJADES) surveyed 63 sites over a remarkable distance of 2,200 kilometres, yielding over 12,000 specimens from 800 species, including sponges, jellyfish, molluscs, starfish, worms, crabs, prawns and fish. Among these, more than 12 new species were discovered, and over 40 species were new records for Indonesia.
Last year, two NUS students had the privilege of living on board the OceanXplorer for five days under the OceanX Education Young Explorers Programme (YEP), travelling from Jakarta to Bali. Taking part in seminars, lab work, job shadowing and even snorkelling, they found the eye-opening experience ignited their passion for marine biology and deepened their connection to, and understanding of, the ocean.
Thirty years ago, Swiss physicist Didier Queloz discovered the first planet outside our solar system, revolutionising astrophysics. What the discovery has brought him and why he still hasn’t had enough.
In 2022, 2.3 million women were diagnosed with breast cancer worldwide and there were 670,000 related deaths. Despite significant progress in recent years, it remains challenging to accurately identify the best treatments for individual patients and to predict cases with poorer prognosis.
Whole genome sequencing is a powerful technique that involves analysing the DNA of both the patient and their tumour to look for genetic changes, or mutations. This provides information on the underlying cause of the tumour and what is driving it. It can also provide valuable information to guide treatment, for example by identifying vulnerabilities in the tumour’s makeup or spotting signs that a patient might be resistant to a particular treatment.
Although the technology is rapidly becoming cheaper – Ultima Genomics has recently announced that it can sequence a human genome for US$100 – it is not widely used across the NHS. Offered through the NHS Genomic Medicine Service, it is currently available for a few adult cancers, rare cancers, paediatric cancers, and certain metastatic cancers.
Professor Serena Nik-Zainal from the Department of Genomic Medicine and Early Cancer Institute at the University of Cambridge said: “It is becoming increasingly possible to use whole genome sequencing to inform cancer management, but it’s arguably not being used to its full potential, and certainly not for some of the more common types of cancer.
“Part of the reason why is because we lack the clinical studies to support its use, but it’s also in part precisely because the information is so rich – in a sense, the information can be too overwhelming to make sense of.”
To help address these challenges, Professor Nik-Zainal and colleagues used data from almost 2,500 women from across England housed within the National Genomic Research Library – one of the world’s largest and most valuable data assets of its kind and run by Genomics England. The data from the 2,500 women came from their recruitment to the 100,000 Genomes Project and was linked to clinical and/or mortality records, tracking outcomes over five years. The researchers looked for genetic changes that cause or influence breast cancer, including problems in the way cells repair DNA.
The results of their study are published today in The Lancet Oncology.
The researchers found that 27% of breast cancer cases had genetic features that could help guide personalised treatment immediately, either with existing drugs or recruitment to prospective or current clinical trials. This equates to more than 15,000 women a year in the UK.
Among those features identified were: HRD (homology-directed repair deficiency), a DNA repair issue found in 12% of all breast cancers; unique mutations that could be targeted with specific drugs; signs of resistance to hormone therapy; and mutational patterns that suggest weaknesses in the cancer that treatments could exploit.
The team identified an additional 15% of cases that had features that could be useful for future research, such as problems with other DNA repair pathways. This would equate to more than 8,300 women a year.
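As a rough check of how the headline figures scale (an illustration assuming roughly 56,000 new breast cancer diagnoses per year in the UK, the approximate incidence reported by Cancer Research UK; the study's own denominator may differ slightly):

```python
# Back-of-the-envelope check of the annual UK figures quoted above.
uk_diagnoses_per_year = 56_000   # assumed approximate UK incidence

actionable_now = 0.27 * uk_diagnoses_per_year       # could guide treatment today
useful_for_research = 0.15 * uk_diagnoses_per_year  # useful for future research

print(f"~{actionable_now:,.0f} women per year with immediately actionable features")
print(f"~{useful_for_research:,.0f} women per year with research-relevant features")
# -> roughly 15,100 and 8,400, consistent with the "more than 15,000" and
#    "more than 8,300" figures in the text.
```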
The analysis also provided insights into prognosis. For example, in the most common subtype of breast cancer, known as ER+HER2- breast cancers, which account for approximately 70% of diagnoses, there were strong genetic indicators of how aggressive the cancer might be. In particular, major structural DNA changes were linked to a much higher risk of death, as were APOBEC mutational signatures (a type of DNA damage pattern) and mutations in the cancer gene TP53. These genetic markers were more predictive than traditional measures like the age of the patient, the stage of their cancer, or tumour grade.
Using the results, the researchers created a framework to help doctors identify which patients need more aggressive treatment and which might safely have less treatment. It also suggested that around 7,500 women a year with low-grade tumours may benefit from more aggressive treatment.
Professor Nik-Zainal said: “The UK is a genuine world-leader in terms of its ability to do whole genome sequencing in the NHS through the Genomic Medicine Service. Now that we have population-level evidence of how impactful whole-genome sequencing could be, we have the potential to make a difference to thousands of patients’ lives every year, helping tailor their care more precisely, giving more treatment to those who need it and less to those who don’t.”
As well as being used to tailor treatments to individual patients, whole genome sequencing data could help transform how we recruit for and run clinical trials, speeding up the development of much needed new treatments.
Professor Nik-Zainal added: “At the moment, we test patients for just a small number of genetic mutations and may invite them to join a clinical trial if the patient has a mutation that matches the trial’s target. But if we have their entire genetic readout instead, we will no longer be restricted to single trials with a specific target. We could massively open up the potential for recruitment, to multiple clinical trials in parallel, making recruitment to clinical trials more efficient, ultimately getting the right therapies to the right patients much faster.”
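To make the recruitment idea concrete, here is a minimal sketch of screening one whole-genome readout against several trials at once; the trial names, gene targets, and patient variants below are entirely hypothetical.

```python
# Hypothetical illustration: matching one patient's genomic findings against
# several clinical trials in parallel, rather than testing gene by gene.
trials = {
    "TRIAL-A": {"BRCA1", "BRCA2"},
    "TRIAL-B": {"TP53"},
    "TRIAL-C": {"PIK3CA", "AKT1"},
}

patient_variants = {"TP53", "PIK3CA", "GATA3"}   # from a whole-genome readout

matches = sorted(name for name, targets in trials.items()
                 if targets & patient_variants)
print(f"Candidate trials for this patient: {matches}")
# -> Candidate trials for this patient: ['TRIAL-B', 'TRIAL-C']
```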
Professor Matt Brown, Chief Scientific Officer of Genomics England, said: “This promising research further demonstrates the potential of genomics in improving cancer treatment outcomes for many people.
“Rapid advances in genomics are already ushering in the next generation of personalised cancer medicine. Not only can a patient’s genes guide precision treatment decisions that will best serve them, but we could improve how we match people up to clinical trials and help more patients access innovative treatments.
“Research like this highlights the value of the National Genomic Research Library and how understanding our genes can provide a real boost to the way we diagnose and treat disease. It’s all thanks to the contribution of participants and NHS partners in the 100,000 Genomes Project - the consented clinical and genomic data opens the door for incredible research opportunities.”
Professor Nik-Zainal is an Honorary Fellow at Murray Edwards College, Cambridge, and an Honorary Consultant in Clinical Genetics at Cambridge University Hospitals NHS Foundation Trust (CUH).
The study was largely funded by the National Institute for Health and Care Research (NIHR), Breast Cancer Research Foundation, Gray Foundation and Cancer Research UK, with additional support from the NIHR Cambridge Biomedical Research Centre.
The University of Cambridge and Addenbrooke's Charitable Trust (ACT) are fundraising for a new hospital that will transform how we diagnose and treat cancer. Cambridge Cancer Research Hospital, set to be built on the Cambridge Biomedical Campus, will bring together clinical excellence from Addenbrooke’s Hospital and world-leading researchers at the University of Cambridge under one roof in a new NHS hospital. The new hospital will be home to the Precision Breast Cancer Institute, applying the latest genomic advances to tailor treatment for breast cancer patients, maximising treatment efficacy and minimising the risk of debilitating side effects.
Whole genome sequencing offered to breast cancer patients is likely to identify unique genetic features that could either guide immediate treatment or help match patients to clinical trials for over 15,000 women a year, say scientists at the University of Cambridge.
The Wyss Institute has developed “organ chips” for the lungs, the intestines, the vagina, the cervix, and the fallopian tubes, among others.
Wyss Institute
A condition more common than asthma or diabetes, yet often ignored
Women with heavy menstrual bleeding wait five years on average for care. Wyss technology could change that.
Sy Boles
Harvard Staff Writer
October 7, 2025
Every minute, a woman in the U.S. requires a blood transfusion due to heavy menstrual bleeding, or HMB. One in three women reports having the condition — which can lead to iron deficiency and anemia — and missing an average of 3.6 weeks of work a year, costing the U.S. economy roughly $94 billion annually, according to the nonprofit Wellcome Leap. Patients routinely suffer for up to five years before they get help, despite HMB being more common than asthma or diabetes in reproductive-aged women.
Despite the condition’s ubiquity and seriousness, its causes are poorly understood.
To address this gap, Donald Ingber, founding director of the Wyss Institute and the Judah Folkman Professor of Vascular Biology at Harvard Medical School and the Vascular Biology Program at Boston Children’s Hospital, is developing the first human model of HMB. In September, the institute announced it had received funding from Wellcome Leap’s $50 million Missed Vital Sign program to build an organ-on-a-chip model of menstruation, using the platform Ingber first developed at the Wyss in 2010.
The goal? Reduce the time it takes a woman to get effective treatment for HMB more than 10-fold — from an average of five years to five months.
“Women’s health has been ignored for so long — and that goes well beyond reproductive health,” Ingber said. “This technology can break down that inequality and focus on women’s health in a direct way.”
An organ on a chip is effectively a “living, 3D cross-section of a major functional unit of an organ,” explained Ingber, who is also the Hansjörg Wyss Professor of Biologically Inspired Engineering at the School of Engineering and Applied Sciences.
Donald Ingber.
File photo by Niles Singer/Harvard Staff Photographer
The chips allow researchers to strip out the complex, interconnected operations of the human body in order to study one piece of it at a time. His lab has already developed functional organ chips for the lungs, the intestines, the vagina, the cervix, and the fallopian tubes, among others.
Ingber plans to use the new menstruation organ-on-a-chip model to explore a range of potential drivers, including genetic mutations, hypoxia or low-oxygen conditions, microbiomic conditions, and inflammation. But first, he and his team need to create the model.
“I always tell my grad students, you always want to reduce a problem down to one molecule of a problem,” Ingber explained. “What makes an organ is two or more tissues that come together and new functions emerge. … So can we simplify something as complex as organ physiology?”
The chips work by isolating a small piece of organ-level function in a controlled environment. Each chip has two parallel channels separated by a porous membrane. One channel contains living human vasculature lined with endothelial cells — the same type of cells that form the inner walls of capillaries and control the exchange of nutrients, gases, and waste — and in some cases connective tissue cells that form a support for overlying lining cells in the body. The neighboring channel is lined by organ-specific epithelial cells that line different organs, including those that form the reproductive tract. Researchers can introduce various stimuli into either channel and observe how the tissues respond. Side channels can be used to apply suction, stretching and compressing the tissue to mimic movements such as breathing and peristalsis.
“We have the ability to control many different parameters individually,” Ingber said. “Is it exactly like in vivo? No. But that’s what every model is: It’s an approximation, and it’s much better than an animal model.”
The comparison to an animal model is apt. With the exception of a species called the Cairo spiny mouse, mice don’t menstruate. Instead, they have what’s called an estrous cycle, in which the endometrium — the lining of the uterus — is reabsorbed into the body.
This biological difference creates a real challenge for medical research. Mice are widely used in preclinical studies because their biology closely mirrors that of humans in important ways. But when it comes to studying the human menstrual cycle, including disorders like heavy menstrual bleeding, the standard animal models fall short — contributing to disparities in research around women’s health.
As part of its broader efforts to reduce disparities in women’s health, the Wyss Institute houses the Women’s Health Catalyst, a research hub that has led work in areas such as lactation, early detection of ovarian cancer, and better treatment of endometriosis, among other projects. Ingber said the organ-on-a-chip has the potential to revolutionize research into understudied areas of women’s reproductive health.
Albert Einstein famously remarked that, had he not been a physicist, he would have been a musician. He said “I know that most joy in my life has come to me from my violin”; and his wife, Elsa, claimed that she fell in love with him “because he played Mozart so beautifully on the violin”.
Dr Paul Wingfield, Director of Studies in Music at Trinity College, has now helped to identify an 1894 German violin as having belonged to Einstein. On 8 October 2025, the instrument will be auctioned by Dominic Winter Auctioneers in Cirencester. When the hammer falls, this will be the end of a remarkable 18-month journey for Dr Wingfield.
In March 2024, Wingfield was at the wake of his brother-in-law, Joseph Schwartz, a lifelong Einstein enthusiast and co-author of the 1979 book Einstein for Beginners. A copy was on a table next to a family photograph album containing a 1912 picture of a small boy playing the violin.
Wingfield says: “This juxtaposition sparked in my mind the idea of composing a musical drama, Einstein’s Violin, in which Einstein tells the story of his life, not as a physicist, but as a violinist, to the accompaniment of music for violin and piano.”
“Researching, scripting and composing this show took me six months, by which time I had collected details of everything Einstein is known to have said or written about music, as well as of the violins he owned, and of the concerts in which he played.”
Einstein’s Violin was premiered in April 2025 in Highgate by the distinguished actor Harry Meacher, with Newnham alumna Leora Cohen on violin and Wingfield himself on piano. After a performance at the Highgate Festival at the end of June, the theatre manager handed Wingfield a message that began ‘I am not mad…’!
“Reading this message proved to be one of the most exciting, if surreal, experiences in my life,” Wingfield says. “It was from an auctioneer who had been commissioned to sell a violin that had purportedly belonged to Einstein, and who was asking for my help in checking the instrument’s provenance.”
Einstein bought the violin in Munich in 1894, before he left for Switzerland. He played it throughout the period in which he developed his theory of relativity and received his Nobel prize, buying a new violin in Berlin in 1920. In 1932, just before he fled Nazi Germany for the US, he gave the Munich violin, along with a bicycle and two books, to his friend and fellow Nobel Laureate in Physics, Max von Laue. The books and the bicycle’s saddle will also be sold on 8 October. Twenty years later, von Laue gifted the violin and other items to a friend, Margarete Hommrich, whose great-great-granddaughter is the current owner.
Wingfield says: “I am of course not an expert on nineteenth-century violins but, by a quirk of circumstance, my extensive research into Einstein’s musical life made me the obvious person to investigate the owner’s narrative.”
“Over the summer I have thus been deploying all the historical skills that I have amassed over the years, in examining correspondence and a wide range of other documents, critically appraising witness testimonies, mapping Einstein’s movements over a forty-year period and even analysing his school-age handwriting."
The 1894 violin has an inscription of the name ‘Lina’, which Einstein bestowed on all of his violins.
“Along the way, I have acquired knowledge about topics that were previously a closed book to me, such as nineteenth-century varnish, the precise measurements of Einstein’s hands and even inter-War Belgian customs regulations. I am now as sure as anyone could be that this violin was indeed once owned by Einstein. It would seem that, just occasionally, life does imitate art.”
Paul Wingfield’s research focuses primarily on Czech music and music theory and analysis. He has published on Janáček, Martinů and nineteenth-century sonata form, and he has recently written a chapter on Joseph Joachim’s Violin Concerto no. 1 for a CUP book on the nineteenth-century violin concerto. His musical drama, Einstein’s Violin, received its premiere on 27 April 2025 at Upstairs at the Gatehouse in Highgate. He is currently composing a new musical drama, Mademoiselle Adagio, about the nineteenth-century violinist Teresa Milanollo.
Laurent Demanet, MIT professor of applied mathematics, has been appointed co-director of the MIT Center for Computational Science and Engineering (CCSE), effective Sept. 1.
Demanet, who holds a joint appointment in the departments of Mathematics and Earth, Atmospheric and Planetary Sciences — where he previously served as director of the Earth Resources Laboratory — succeeds Youssef Marzouk, who is now serving as the associate dean of the MIT Schwarzman College of Computing.
Joining co-director Nicolas Hadjiconstantinou, the Quentin Berg (1937) Professor of Mechanical Engineering, Demanet will help lead CCSE, supporting students, faculty, and researchers while fostering a vibrant community of innovation and discovery in computational science and engineering (CSE).
“Laurent’s ability to translate concepts of computational science and engineering into understandable, real-world applications is an invaluable asset to CCSE. His interdisciplinary experience is a benefit to the visibility and impact of CSE research and education. I look forward to working with him,” says Dan Huttenlocher, dean of the MIT Schwarzman College of Computing and the Henry Ellis Warren Professor of Electrical Engineering and Computer Science.
“I’m pleased to welcome Laurent into his new role as co-director of CCSE. His work greatly supports the cross-cutting methodology at the heart of the computational science and engineering community. I’m excited for CCSE to have a co-director from the School of Science, and eager to see the center continue to broaden its connections across MIT,” says Asu Ozdaglar, deputy dean of the MIT Schwarzman College of Computing, department head of Electrical Engineering and Computer Science, and MathWorks Professor.
Established in 2008, CCSE was incorporated into the MIT Schwarzman College of Computing as one of its core academic units in January 2020. An interdisciplinary research and education center dedicated to pioneering applications of computation, CCSE houses faculty, researchers, and students from a range of MIT schools, such as the schools of Engineering, Science, Architecture and Planning, and the MIT Sloan School of Management, as well as other units of the college.
“I look forward to working with Nicolas and the college leadership on raising the profile of CCSE on campus and globally. We will be pursuing a set of initiatives that span from enhancing the visibility of our research and strengthening our CSE PhD program, to expanding professional education offerings and deepening engagement with our alumni and with industry,” says Demanet.
Demanet’s research lies at the intersection of applied mathematics and scientific computing, with a focus on imaging the structures beneath Earth’s surface. He also has strong interests in machine learning, inverse problems, and wave propagation. Through his position as principal investigator of the Imaging and Computing Group, Demanet and his students aim to answer fundamental questions in computational seismic imaging to increase the quality and accuracy of mapping and the projection of changes in Earth’s geological structures. His work has implications for environmental monitoring, water resources and geothermal energy, and the understanding of seismic hazards, among other areas.
He joined the MIT faculty in 2009. He received an Alfred P. Sloan Research Fellowship and the U.S. Air Force Young Investigator Award in 2011, and a CAREER award from the National Science Foundation in 2012. He also held the Class of 1954 Career Development Professorship from 2013 to 2016. Prior to coming to MIT, Demanet held the Szegö Assistant Professorship at Stanford University. He completed his undergraduate studies in mathematical engineering and theoretical physics at Université de Louvain in Belgium, and earned a PhD in applied and computational mathematics at Caltech, where he was awarded the William P. Carey Prize for best dissertation in the mathematical sciences.
Laurent Demanet will help lead CCSE, supporting students, faculty, and researchers while fostering a vibrant community of innovation and discovery in computational science and engineering.
For Priya Donti, childhood trips to India were more than an opportunity to visit extended family. The biennial journeys activated in her a motivation that continues to shape her research and her teaching.
Contrasting her family home in Massachusetts, Donti — now the Silverman Family Career Development Professor in the MIT Department of Electrical Engineering and Computer Science (EECS) and a principal investigator at the MIT Laboratory for Information and Decision Systems — was struck by the disparities in how people live.
“It was very clear to me the extent to which inequity is a rampant issue around the world,” Donti says. “From a young age, I knew that I definitely wanted to address that issue.”
That motivation was further stoked by a high school biology teacher, who focused his class on climate and sustainability.
“We learned that climate change, this huge, important issue, would exacerbate inequity,” Donti says. “That really stuck with me and put a fire in my belly.”
So, when Donti enrolled at Harvey Mudd College, she thought she would direct her energy toward the study of chemistry or materials science to create next-generation solar panels.
Those plans, however, were soon upended. Donti “fell in love” with computer science, and then discovered work by researchers in the United Kingdom arguing that artificial intelligence and machine learning would be essential to help integrate renewables into power grids.
“It was the first time I’d seen those two interests brought together,” she says. “I got hooked and have been working on that topic ever since.”
Pursuing a PhD at Carnegie Mellon University, Donti was able to design her degree to include computer science and public policy. In her research, she explored the need for fundamental algorithms and tools that could manage, at scale, power grids relying heavily on renewables.
“I wanted to have a hand in developing those algorithms and tool kits by creating new machine learning techniques grounded in computer science,” she says. “But I wanted to make sure that the way I was doing the work was grounded both in the actual energy systems domain and working with people in that domain” to provide what was actually needed.
While Donti was working on her PhD, she co-founded a nonprofit called Climate Change AI. Her objective, she says, was to help the community of people involved in climate and sustainability — “be they computer scientists, academics, practitioners, or policymakers” — to come together and access resources, connection, and education “to help them along that journey.”
“In the climate space,” she says, “you need experts in particular climate change-related sectors, experts in different technical and social science tool kits, problem owners, affected users, policymakers who know the regulations — all of those — to have on-the-ground scalable impact.”
When Donti came to MIT in September 2023, it was not surprising that she was drawn by its initiatives directing the application of computer science toward society’s biggest problems, especially the current threat to the health of the planet.
“We’re really thinking about where technology has a much longer-horizon impact and how technology, society, and policy all have to work together,” Donti says. “Technology is not just one-and-done and monetizable in the context of a year.”
Her work uses deep learning models to incorporate the physics and hard constraints of electric power systems that employ renewables for better forecasting, optimization, and control.
“Machine learning is already really widely used for things like solar power forecasting, which is a prerequisite to managing and balancing power grids,” she says. “My focus is, how do you improve the algorithms for actually balancing power grids in the face of a range of time-varying renewables?”
Among Donti’s breakthroughs is a promising approach that lets power grid operators optimize for cost while accounting for the actual physical realities of the grid, rather than relying on approximations. While the solution is not yet deployed, it appears to run 10 times faster, and far more cheaply, than previous technologies, and it has attracted the attention of grid operators.
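Her published methods are not reproduced here, but the general flavor of pairing a learned prediction with hard physical constraints can be sketched in a toy economic-dispatch setting. Everything below — the generator limits, the demand, and the simple clip-and-rebalance repair step — is an illustrative assumption, not her algorithm.

```python
import numpy as np

# Toy "predict then repair" dispatch: a stand-in for a neural network guesses
# each generator's output, then a repair step enforces two hard constraints:
# (1) total generation equals demand, (2) outputs stay within generator limits.
rng = np.random.default_rng(0)

p_min = np.array([10.0, 20.0, 0.0])     # MW lower limits (illustrative)
p_max = np.array([100.0, 150.0, 80.0])  # MW upper limits (illustrative)
demand = 220.0                          # MW to be served (illustrative)

raw_guess = rng.uniform(p_min, p_max)   # stand-in for a learned prediction

def repair(p, demand, p_min, p_max, iters=50):
    """Clip to limits, then spread any remaining supply-demand mismatch evenly
    across generators, repeating until the dispatch balances."""
    p = np.clip(p, p_min, p_max)
    for _ in range(iters):
        mismatch = demand - p.sum()
        if abs(mismatch) < 1e-9:
            break
        p = np.clip(p + mismatch / len(p), p_min, p_max)
    return p

dispatch = repair(raw_guess, demand, p_min, p_max)
print("raw guess:", np.round(raw_guess, 1), "total =", round(raw_guess.sum(), 1))
print("feasible: ", np.round(dispatch, 1), "total =", round(dispatch.sum(), 1))
# In practice, a constraint-enforcing step of this general kind is typically
# made differentiable so the network can be trained end-to-end through it.
```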
Another technology she is developing works to provide data that can be used in training machine learning systems for power system optimization. In general, much data related to the systems is private, either because it is proprietary or because of security concerns. Donti and her research group are working to create synthetic data and benchmarks that, Donti says, “can help to expose some of the underlying problems” in making power systems more efficient.
“The question is,” Donti says, “can we bring our datasets to a point such that they are just hard enough to drive progress?”
For her efforts, Donti has been awarded the U.S. Department of Energy Computational Science Graduate Fellowship and the NSF Graduate Research Fellowship. She was recognized as part of MIT Technology Review’s 2021 list of “35 Innovators Under 35” and Vox’s 2023 “Future Perfect 50.”
Next spring, Donti will co-teach a class called AI for Climate Action with Sara Beery, EECS assistant professor, whose focus is AI for biodiversity and ecosystems, and Abigail Bodner, an assistant professor in Earth, Atmospheric and Planetary Sciences, holding an MIT Schwarzman College of Computing shared position with EECS.
“We’re all super-excited about it,” Donti says.
Coming to MIT, Donti says, “I knew that there would be an ecosystem of people who really cared, not just about success metrics like publications and citation counts, but about the impact of our work on society.”
“Machine learning is already really widely used for things like solar power forecasting, which is a prerequisite to managing and balancing power grids,” says EECS assistant professor and LIDS PI Priya Donti. “My focus is: How do you improve the algorithms for actually balancing power grids in the face of a range of time-varying renewables?”
The new museum, located at the heart of the Princeton campus between Elm Drive and Chapel Drive along McCosh Walk, roughly doubles the space for the exhibition, conservation, study and interpretation of collections that span the globe.
‘She had a sense of caring for everybody that she encountered.’
Jane Goodall speaking at Harvard after receiving the Roger Tory Peterson medal in 2007.
File photo by Stephanie Mitchell/Harvard Staff Photographer
Alvin Powell
Harvard Staff Writer
October 7, 2025
Richard Wrangham remembers his teacher and colleague Jane Goodall as a force of science, empathy, and hope
When the scientist and conservationist Jane Goodall died last week, she left behind a transformed understanding of humankind’s relationship to its closest ape cousins — chimpanzees — as well as a legacy that highlights the implications of that relationship in understanding ourselves.
Richard Wrangham, Harvard’s Ruth Moore Professor of Biological Anthropology, emeritus, and a leading researcher of chimp behavior, worked alongside Goodall first as a student and later as a colleague after he founded the Kibale Forest Chimpanzee Project in Uganda. In this edited conversation, he discusses her wider impact as a teacher — not just of colleagues and fellow scientists, but as an exemplar of hope and empathy.
You did graduate work at the Gombe Stream in Tanzania. What was that like?
It was completely magical. I was working in Gombe, a beautiful place with semi-forested, semi-bush, semi-grassland tumbling down to a shining blue lake. Through the hills and valleys roamed about 60 chimpanzees whose behavior was still little understood. Every day was a thrill.
Jane had arrived in 1960 and did a wonderful job of tracking the chimps, finding them and observing them in the wild, before she got to know them as individuals by staying with them in a small camp area. Starting in ’69, students began following the chimps wherever they went. I arrived in 1970 and joined that group. We discovered, for example, that the chimps had a territory that they defended against their neighbors, which raised all sorts of fascinating questions.
At this time Jane was mostly in Serengeti, studying carnivores. She would come to Gombe for a week at a time, which was a joy because she wanted to know all about what was happening with the chimps. She was interested in everything.
Richard Wrangham.
File photo by Stephanie Mitchell/Harvard Staff Photographer.
How would you describe her as a person?
Incredibly focused. One thing not everybody appreciates is that Jane was far more than a brave young woman habituating chimps in the forest. She was a really good scientist. She combined meticulous observation with a very good sense of theory. We were very lucky that Jane was one of the first people to study chimpanzees in the wild, because she was so good at it.
Is there a single quality or achievement that you think she should be best known for?
One answer is in terms of what unites her interests, and I think that’s empathy: empathy for chimps, empathy for the people living around the chimp sites — you don’t conserve at the expense of other people — empathy for the world, empathy for creatures in the world. She had a sense of caring for everybody that she encountered.
It’s famous that she was entirely focused on chimpanzee behavior and their natural history until 1986. Her focus changed as a result of a Chicago conference called “Understanding Chimpanzees.” She heard the conservation session and realized that chimps were in trouble. She learned more about the cruel ways in which chimps were often held in captivity and from then on, it was conservation and care that mattered to her far more than research. I think it’s part of the reason she was so successful. She was dauntingly single-minded.
Her work and yours have transformed our understanding of humanity’s closest primate cousin. What do we know now that we didn’t know then?
When Jane began her work in the 1960s, there was no reason to think that any of the great apes were more significant for understanding human evolution than any other, but she discovered far more similarity in the behavior of chimpanzees and humans than between humans and what was known then about gorillas and orangutans and bonobos.
DNA research would later show that chimpanzees are more closely related to humans than they are to gorillas. Her behavioral discoveries of tool-making, tool-using, food-sharing, hunting, warlike behavior, as well as the astonishing intimacy of the mother-infant relationships — things that unite humans and chimps — anticipated the DNA revolution.
People started taking far more seriously the notion that there is an underlying biological influence on human behavior. She also helped convince scientists in general that chimps had far more similarity to us in their emotional lives, in their capacity for feeling, and in their ability to think, than any wild animal had been shown to have.
Richard W. Wrangham (left) presenting Goodall with the Roger Tory Peterson medal in 2007.
File photo by Stephanie Mitchell/Harvard Staff Photographer
Goodall receives the 2003 Global Environmental Citizen Award from Eric Chivian, who was the director of the Center for Health and the Global Environment at Harvard.
Harvard file photo
Has that work changed our treatment of chimpanzees?
It’s had enormous effects. There were hundreds of chimpanzees in medical facilities in the middle of the last century that now have been released into sanctuaries, where they are nurtured relatively comfortably into their old age without being forced into tiny solitary cages and being the subject of stressful medical experiments.
Some of the coverage has characterized her as an optimist, which seems a bit surprising, given humanity’s assault on nature. Do you agree that she was?
She absolutely felt that it was important for her to be an optimist because people need hope. People need to be motivated to do good things: good for themselves, good for the planet, good for their communities, good for nature. The books that she wrote in the last two decades with “hope” in the title reflect a very conscious determination to keep hope alive. What did she really think? There’s no question the difficulties got to her, but at the same time, I think that she genuinely felt that there are reasons for hope. One of the most important sources of hope for her was the indomitable human spirit. If you can remind people that if you try hard enough you can do anything — her mother’s message — then good things will happen.
Do you share her optimism?
I am not an optimist about maintaining anything like the level of nature that we have now. I see the human species in an unstoppable takeover of the great majority of nature in the world. I think that the hope for the future of our wild places is to focus on the big places that will remain the source for as many animals and plants as we can keep alive. Kibale forest is the biggest forest in Uganda and it’s not very big. It’s only about 250 square miles.
The hope is that we can keep countries thinking that these special places are worth saving. I worry that if we spend too much time focused on saving every little forest, we’ll lose the big picture. But I can feel Jane looking over my shoulder saying, “You shouldn’t say that.” She would say, “You just passionately fight for every little forest, and maybe, out of that fight, more energy comes to save the big ones too.”
Science & Tech
A real butterfly effect
Andrew Berry.
Stephanie Mitchell/Harvard Staff Photographer
Kermit Pattison
Harvard Staff Writer
October 7, 2025
5 min read
Saga that winds through centuries, continents results in newly recognized species being named in honor of Harvard biologist
This is a tale of scholarly obsession. It involves a burning ship, a jungle-exploring Victorian naturalist, a Harvard biologist, and a rare butterfly.
Evolutionary biologist Andrew Berry is a scholar of Alfred Russel Wallace, a pioneering evolutionist overshadowed by Charles Darwin.
Over the years he has collected memorabilia that connect him to his scientific hero, including a rare first edition of travelogues, an autographed letter, and an original 19th-century map.
Now Berry can claim an even rarer link: A previously-unknown butterfly species collected by Wallace in the Amazon — and forgotten in museum drawers for more than 150 years — has been designated as a new species named in his honor.
“I’m absolutely thrilled, sad though it is to be so pathetically vain about having a little brown butterfly named after you,” said Berry as he sat in his book-lined office beside a colorful model of his namesake butterfly, Euptychia andrewberryi. “Seeing my excitement, you might well think, ‘Get a life, man!’”
How that tropical butterfly landed on the desk of a wry English biologist is a scientific saga that winds through centuries across Brazil, the Atlantic Ocean, London, the Harvard campus — and began for Berry with a writing assignment.
Alfred Russel Wallace.
Photo illustration by Liz Zonarich/Harvard Staff
Wallace, who was born in Wales in 1823, proposed his own theory of evolution by natural selection at the same time as Charles Darwin in 1858. For a variety of reasons, Darwin became credited as the father of evolutionary biology, and Wallace went down in history as an also-ran.
Yet Wallace was a trailblazing naturalist in his own right, credited with many other major scientific achievements. He fathered the field of biogeography and recognized a difference in fauna that distinguished Asia from Australia, New Guinea, and the Pacific islands — a boundary now called the “Wallace Line.”
In 1848, Wallace and fellow entomologist Henry Walter Bates sailed for Brazil to explore the Amazon and collect insects and other species.
Four years later, Wallace was returning to England when his ship caught fire during the Atlantic crossing, and nearly all his collections were lost. Wallace and his fellow survivors spent 10 days in lifeboats before being rescued.
Fortunately, Wallace had shipped back a few crates of specimens before the ill-fated voyage. Among them were some butterflies from Brazil.
Nakahara, a lepidopterist, spent a decade researching a monograph on the genus Euptychia, a group of butterflies from South America and Central America. Poring over collections around the globe, he found the butterflies collected by Wallace and Bates at the Natural History Museum in London.
Andrew Berry explores the butterfly collections.
Photos by Stephanie Mitchell/Harvard Staff Photographer
Life-sized model of the colorful butterfly Euptychia andrewberryi, otherwise known as “Andrew Berry’s Black-eyed Satyr” on his desk.
Butterflies in the collections at the MCZ.
“A Narrative of Travels on the Amazon and Rio Negro” by Alfred R. Wallace.
A number of those specimens were classified as the pitch brown black-eyed satyr (Euptychia picea). But Nakahara recognized that five were anatomically distinct and belonged to a new species, one previously unknown to science.
He had a good candidate for a new name — an amusing character who had given him a deeper appreciation of Wallace.
Enter Berry, who is now assistant head tutor of integrative biology and lecturer on organismic and evolutionary biology.
Berry has worn many hats as a biologist. He chased giant rats through the jungles of New Guinea, studied the genetics of fruit flies, and (bear in mind this is not a strict ranking of priorities) mentored generations of undergraduate biology students. He co-authored a book with the Nobel laureate geneticist James Watson and has written extensively about the history of science.
Near the turn of the millennium, he was fatefully assigned to write about Wallace for the London Review of Books and found himself enraptured. He went on to write numerous essays on Wallace and publish a collection of his writings.
“You can’t read Wallace and not fall in love with him, partly because he’s such a fantastic writer,” said Berry. “Then there’s the underdog piece — here’s the guy who co-discovered the theory that we trumpet, and yet he basically dropped off the map.”
“You can’t read Wallace and not fall in love with him, partly because he’s such a fantastic writer. Then there’s the underdog piece — here’s the guy who co-discovered the theory that we trumpet, and yet he basically dropped off the map.”
Andrew Berry
Nakahara felt that such a scholar deserved his own species.
When he first arrived in Cambridge, Nakahara stayed at the house of his faculty sponsor, Naomi Pierce, curator of lepidoptera in the MCZ and Sidney A. and John H. Hessel Professor of Biology, who also happens to be married to Berry.
Nakahara discovered that breakfast in the household involved generous servings of Wallace discussion.
“His contribution is bringing Wallace to people’s attention, because Wallace is this person who’s famous for not being famous,” said Nakahara. “Andrew is very good at explaining the importance of these early naturalists and Victorian biologists, and he is very engaging.”
Having a butterfly named in his honor — better yet, one collected by Wallace and Bates — makes Berry’s heart flutter. One student, Amanda Dynak ’24, made a model of his eponymous species as a gift.
“Andrew is over the moon about it,” said Pierce with a laugh. “I just think it’s a fantastic story, because Andrew is passionate about Alfred Russel Wallace, and has been for a while — in fact, long before Wallace became popular.”
But Berry cannot claim any special distinction in his house. His wife already has three species named after her, including a zombie-ant fungus, a brain-hijacking parasite.
Marcyliena Morgan at the Hiphop Archive & Research Institute in 2019.
Harvard file photos
Campus & Community
She pioneered study of hip-hop as high art
Harvard renames first-of-its-kind archive after founder Marcyliena Morgan, who died recently at age 75
Christy DeSmith
Harvard Staff Writer
October 7, 2025
6 min read
Today, hip-hop is the world’s most popular music genre by most commercial measures. But that wasn’t the case three decades ago when linguistic anthropologist Marcyliena H. Morgan started pitching Harvard administrators on her big idea: a first-of-its-kind hip-hop archive and academic research center.
“She wanted to give deeper legitimacy to studying this globally influential style of creative production,” recalled her husband of 28 years, Lawrence D. Bobo, W.E.B. Du Bois Professor of the Social Sciences.
Morgan, founding director of Harvard’s Hiphop Archive & Research Institute, died Sept. 28 due to complications from Alzheimer’s disease. The emerita professor of social sciences and of African and African American Studies was 75.
“She took a holistic view of the hip-hop community. When she did events, there were scholars, there were artists.”
Lawrence D. Bobo
To honor that legacy, Hopi Hoekstra, Edgerley Family Dean of the Faculty of Arts and Sciences, recently approved a new name for the gallery-like space at the Hutchins Center: the Marcyliena H. Morgan Hip Hop Archive & Research Institute.
According to Henry Louis Gates Jr., director of the Hutchins Center, word of the rechristening was relayed to Morgan a few weeks ago. “I sent a letter that was read at her bedside,” he said.
Morgan, who grew up with five sisters on Chicago’s South Side, earned advanced degrees in linguistics at the University of Essex and University of Pennsylvania. In the early 1990s while teaching a course on urban speech communities at the University of California, Los Angeles, she noticed students submitting essays on innovative patterns of speech used by Ice Cube and other West Coast rappers.
“That drew her attention to this enormous creativity with language on the one hand, and this very powerful youth culture on the other,” said Bobo, a fellow UCLA faculty member at the time. “She ended up dedicating much of the latter part of her career to studying hip-hop culture while trying to preserve and make more broadly understandable its material and cultural production.”
Morgan began amassing a vast collection of hip-hop albums, magazines, fashion, and concert posters while still at UCLA in the 1990s. As Gates recalled, her vision for a museum-quality archive was first articulated to him around 1996.
“At the time, no one could have envisioned that hip-hop would become the lingua franca of youth musical culture worldwide,” Gates observed. “The equivalent would be if W.E.B. Du Bois or Alain Locke in 1925 had thought to document the evolution of this new musical form called jazz.”
After joining Bobo at Harvard in 2002, Morgan wasted no time establishing the archive in the African and African American Studies Department. The Hiphop Archive was briefly relocated to Stanford University, where Morgan and Bobo were on the faculty from 2005 to 2007. Its current space at the Hutchins Center opened in 2008, shortly after the couple’s return to Harvard.
“There were so many people, including many artists, who visited the archive over the years and instantly burst into tears,” Bobo said. “They regarded hip-hop as central to their creative and personal development and were profoundly moved to see it treated with such respect and seriousness.”
At a symposium two years ago, colleagues, friends, and former students marked Morgan’s retirement by celebrating her approach to the discipline. Her 2009 title “The Real Hiphop: Battling for Knowledge, Power, and Respect in the LA Underground” was praised for examining innovative uses of language at a time when most scholars saw hip-hop through the lens of political science or sociology.
Morgan smiles as her husband, Lawrence D. Bobo, applauds during the 2023 celebration marking her retirement.
File photo by Niles Singer/Harvard Staff Photographer
“These young people were writing in a form of musical poetry about their feelings, their hopes, their own wisdom, or the wisdom they had received from others,” said colleague and friend Evelyn Brooks Higginbotham, Victor S. Thomas Professor of History and of African and African American Studies. “Marcy writes about it as this intergenerational dialogue. They use the art form not only to articulate the world they’re in, but the world they want in the future.”
Also applauded at the 2023 tribute were Morgan’s respect for the genre’s activist elements (such as hip-hop artists playing a lead role in promoting safe sex during the HIV/AIDS crisis) as well as the events she convened under Harvard’s imprimatur, including a 2003 symposium examining the artistic contributions of Tupac Shakur.
“She took a holistic view of the hip-hop community,” Bobo said. “When she did events, there were scholars, there were artists. There were journalists and other popular voices who routinely judged the quality of what’s produced.”
“She’s one of the first people, if not the first person, to give validation to the intellectual importance of this new form,” Higginbotham said. “You now have courses on hip-hop not just at Harvard but all over the country. You have Kendrick Lamar winning the Pulitzer Prize in 2018. You have the Smithsonian’s National Museum of American History announcing a multiyear initiative to collect elements of hip-hop art and culture in 2006.”
“She’s one of the first people, if not the first person, to give validation to the intellectual importance of this new form.”
Evelyn Brooks Higginbotham
Morgan also worked at fostering personal connections. The 2023 symposium on Morgan’s career drew former students and mentees from every corner of the U.S. and as far away as Italy. “There are people of color all over this planet who have Ph.D.’s because Marcyliena Morgan mentored us and believed in us,” Bennett told the audience.
Morgan, famous for her coconut cake, grew up in a family that surrounded itself with music, cooking, ideas, and friendship. She carried on that tradition with Bobo in their Cambridge home. Lavish meals were prepared for teaching assistants. A dozen or so guests came for Thanksgiving dinner every year. Two days later came the second serving, when 40 to 50 people arrived for an annual holiday Morgan liked to call “It’s Not Over Yet.”
“Marcy, in the pit of her soul, was a community-builder, a community-maker,” Bobo said. “She saw food and talking and critical thought as enormously important ingredients for a meaningful and enjoyable life.”
The name for the new museum's American art section recognizes the late John Wilmerding, who established at Princeton one of the country's leading programs for the study of American art. A gift from The Anschutz Foundation names the five galleries within it.
The MyCoast New York app has already provided forecasters and emergency managers with a new understanding of flooding around the state, as sea levels rise and storms intensify.
Sir John was a visionary in the field of developmental biology, whose pioneering work on nuclear transfer in frogs addressed one of the most fundamental questions in biology: whether genetic information is retained or lost during development.
His work paved the way for ground-breaking advances in biomedical research, from stem cell biology to mouse genetics and IVF.
His discovery that mature adult cells can be reprogrammed to an embryonic stem cell state (known as pluripotency) was recognised with the Nobel Prize in Physiology or Medicine in 2012.
Professor Ben Simons, Director of the Gurdon Institute at the University of Cambridge said: “As well as being a towering figure in developmental and stem cell biology, through his dedication to science, his affection for colleagues and his humility, Sir John Gurdon was an inspiration to us all.”
Professor Deborah Prentice, Vice-Chancellor of the University of Cambridge, said: “I am deeply saddened to hear of the passing of Sir John Gurdon. He was a giant within the scientific community, a truly inspirational figure who rightfully earned a Nobel Prize in 2012 for his pioneering work in stem cell research. Sir John will be greatly missed by everyone here in Cambridge, but he leaves behind him an outstanding legacy for which we are extremely grateful.”
Professor Jon Simons, Head of the School of Biological Sciences at the University of Cambridge, said: "Sir John Gurdon was, and will continue to be, one of the most inspirational scientists in our community, and in the world. As well as outstanding contributions to developmental biology, John was also a dedicated colleague and mentor, who was deeply committed to interdisciplinary collaboration. He will be greatly missed."
Born in 1933, Sir John was educated at Eton and Christ Church, Oxford, where he gained First Class Honours in Zoology. Following appointments in Oxford and the United States, Sir John joined the Medical Research Council Laboratory of Molecular Biology in Cambridge in 1972 and later became the John Humphrey Plummer Professor of Cell Biology in the Department of Zoology. He served as Master of Magdalene College, Cambridge from 1995 to 2002.
In 1991 he founded the Wellcome/CRUK Institute for Cell Biology and Cancer, later renamed the Gurdon Institute at the University of Cambridge, together with Ron Laskey. Their vision was to bring together expertise in two research areas: developmental biology and cancer biology. Sir John’s personal commitment to research - he continued to perform experiments at the bench until his 90s - was matched only by his dedication and support of his colleagues.
The University remembers Sir John as an inspiring scientist, insightful colleague, mentor, teacher and leader, whose legacy will live on through the generations of scientists trained in his lab, and extends its heartfelt condolences to Lady Gurdon and the family.
A federal stop-work order has threatened the progress a Weill Cornell Medicine researcher has made in understanding a lethal and treatment-resistant form of prostate cancer.
Clarke, who is Professor Emeritus of the Graduate School at the University of California at Berkeley, completed both his undergraduate and PhD studies at Cambridge. He was born in Cambridge and attended the Perse School on an academic scholarship before coming to Christ’s College as an undergraduate to read Natural Sciences.
Clarke moved to Darwin College for his PhD, which he completed in 1968 at the Cavendish Laboratory. His research is based on the theory, design and applications of superconducting quantum interference devices (SQUIDs), which are ultrasensitive detectors of magnetic flux.
“John Clarke, together with Michel Devoret and John Martinis, pushed the door open for today’s quantum technologies based on superconducting qubits, putting fundamental quantum phenomena at work in real devices,” said Professor Mete Atatüre, Head of the Cavendish Laboratory. “Brian Josephson – another Cavendish Nobel Laureate – was first to propose the concept of a new quantum phase arising from tunnelling between two superconductors. John Clarke's PhD work in the Cavendish Laboratory demonstrated the operational principle of what we call a superconductor-normal-superconductor (SNS) Josephson Junction - essentially the heart of all superconducting qubits today. Devoret and Martinis spearheaded the translation of this fundamental quantum physics concept into what superconducting quantum computing is today. I’m of course thrilled with today’s well-deserved announcement.”
A major question in physics is the maximum size of a system that can demonstrate quantum mechanical effects. Clarke, Devoret and Martinis conducted experiments with an electrical circuit in which they demonstrated both quantum mechanical tunnelling and quantised energy levels in a system big enough to be held in the hand.
Quantum mechanics allows a particle to move straight through a barrier, using a process called tunnelling. As soon as large numbers of particles are involved, quantum mechanical effects usually become insignificant. The laureates’ experiments demonstrated that quantum mechanical properties can be made concrete on a macroscopic scale.
In 1984 and 1985, Clarke, Devoret and Martinis conducted a series of experiments with an electronic circuit built of superconductors, components that can conduct a current with no electrical resistance. In the circuit, the superconducting components were separated by a thin layer of non-conductive material, a setup known as a Josephson junction. By refining and measuring all the various properties of their circuit, they were able to control and explore the phenomena that arose when they passed a current through it. Together, the charged particles moving through the superconductor comprised a system that behaved as if they were a single particle that filled the entire circuit.
This macroscopic particle-like system is initially in a state in which current flows without any voltage. The system is trapped in this state, as if behind a barrier that it cannot cross. In the experiment the system shows its quantum character by managing to escape the zero-voltage state through tunnelling. The system’s changed state is detected through the appearance of a voltage.
The laureates could also demonstrate that the system behaves in the manner predicted by quantum mechanics – it is quantised, meaning that it only absorbs or emits specific amounts of energy.
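For readers who want the math behind that description, the standard textbook picture treats the phase difference φ across a current-biased Josephson junction as a particle in a "tilted washboard" potential. The expression below is conventional quantum-circuit notation, not drawn from the laureates' papers or the prize citation.

```latex
% Tilted-washboard potential for a current-biased Josephson junction
% (standard textbook form; I is the bias current, I_c the critical current)
U(\varphi) = -E_J \cos\varphi - \frac{\hbar I}{2e}\,\varphi,
\qquad E_J = \frac{\hbar I_c}{2e},
\qquad V = \frac{\hbar}{2e}\,\frac{d\varphi}{dt}
```

For a bias current below the critical current, the phase sits in a local minimum of this potential, which is the zero-voltage state, and the energy levels inside the well are quantised. When the phase escapes the well by macroscopic quantum tunnelling it begins to run, and the voltage V given above appears, which is the signature the experiments detected.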
Professor Deborah Prentice, Vice-Chancellor of the University of Cambridge, said: “Congratulations to Cambridge alumnus Professor Clarke on being jointly awarded this year’s Nobel Prize in Physics for his research into quantum mechanical tunnelling. Not only did he grow up in this incredible city, but he studied from his undergraduate degree through to his PhD here.
“Professor Clarke joins 125 other noteworthy Cambridge alumni and researchers who have been awarded Nobel Prizes, highlighting our University’s remarkable impact within the research and education sectors.”
Clarke has continued his active affiliation with Cambridge over the years, returning several times: in 1972 he was elected to a Fellowship at Christ’s, in 1989 he was a Visiting Fellow at Clare Hall, and in 1998 he was elected a By-Fellow of Churchill College. He was awarded the ScD from the University in 2003, and was elected an Honorary Fellow of Darwin College in 2023.
University of Cambridge alumnus Professor John Clarke has been awarded the 2025 Nobel Prize in Physics, jointly with Michel H Devoret and John M Martinis, for their work revealing quantum physics in action.
The demise of the rules-based international order is jeopardising Switzerland’s success model, says security expert Daniel Möckli. He believes the country faces difficult positioning issues.
Tokamaks are machines that are meant to hold and harness the power of the sun. These fusion machines use powerful magnets to contain a plasma hotter than the sun’s core and push the plasma’s atoms to fuse and release energy. If tokamaks can operate safely and efficiently, the machines could one day provide clean and limitless fusion energy.
Today, there are a number of experimental tokamaks in operation around the world, with more underway. Most are small-scale research machines built to investigate how the devices can spin up plasma and harness its energy. One of the challenges that tokamaks face is how to safely and reliably turn off a plasma current that is circulating at speeds of up to 100 kilometers per second, at temperatures of over 100 million degrees Celsius.
Such “rampdowns” are necessary when a plasma becomes unstable. To prevent the plasma from further disrupting and potentially damaging the device’s interior, operators ramp down the plasma current. But occasionally the rampdown itself can destabilize the plasma. In some machines, rampdowns have caused scrapes and scarring to the tokamak’s interior — minor damage that still requires considerable time and resources to repair.
Now, scientists at MIT have developed a method to predict how plasma in a tokamak will behave during a rampdown. The team combined machine-learning tools with a physics-based model of plasma dynamics to simulate a plasma’s behavior and any instabilities that may arise as the plasma is ramped down and turned off. The researchers trained and tested the new model on plasma data from an experimental tokamak in Switzerland. They found the method quickly learned how plasma would evolve as it was tuned down in different ways. What’s more, the method achieved a high level of accuracy using a relatively small amount of data. This training efficiency is promising, given that each experimental run of a tokamak is expensive and quality data is limited as a result.
The new model, which the team highlights this week in an open-access Nature Communications paper, could improve the safety and reliability of future fusion power plants.
“For fusion to be a useful energy source it’s going to have to be reliable,” says lead author Allen Wang, a graduate student in aeronautics and astronautics and a member of the Disruption Group at MIT’s Plasma Science and Fusion Center (PSFC). “To be reliable, we need to get good at managing our plasmas.”
The study’s MIT co-authors include PSFC Principal Research Scientist and Disruptions Group leader Cristina Rea, and members of the Laboratory for Information and Decision Systems (LIDS) Oswin So, Charles Dawson, and Professor Chuchu Fan, along with Mark (Dan) Boyer of Commonwealth Fusion Systems and collaborators from the Swiss Plasma Center in Switzerland.
“A delicate balance”
Tokamaks are experimental fusion devices that were first built in the Soviet Union in the 1950s. The device gets its name from a Russian acronym that translates to a “toroidal chamber with magnetic coils.” Just as its name describes, a tokamak is toroidal, or donut-shaped, and uses powerful magnets to contain and spin up a gas to temperatures and energies high enough that atoms in the resulting plasma can fuse and release energy.
Today, tokamak experiments are relatively low-energy in scale, with few approaching the size and output needed to generate safe, reliable, usable energy. Disruptions in experimental, low-energy tokamaks are generally not an issue. But as fusion machines scale up to grid-scale dimensions, controlling much higher-energy plasmas at all phases will be paramount to maintaining a machine’s safe and efficient operation.
“Uncontrolled plasma terminations, even during rampdown, can generate intense heat fluxes damaging the internal walls,” Wang notes. “Quite often, especially with the high-performance plasmas, rampdowns actually can push the plasma closer to some instability limits. So, it’s a delicate balance. And there’s a lot of focus now on how to manage instabilities so that we can routinely and reliably take these plasmas and safely power them down. And there are relatively few studies done on how to do that well.”
Bringing down the pulse
Wang and his colleagues developed a model to predict how a plasma will behave during tokamak rampdown. While they could have simply applied machine-learning tools such as a neural network to learn signs of instabilities in plasma data, “you would need an ungodly amount of data” for such tools to discern the very subtle and ephemeral changes in extremely high-temperature, high-energy plasmas, Wang says.
Instead, the researchers paired a neural network with an existing model that simulates plasma dynamics according to the fundamental rules of physics. With this combination of machine learning and a physics-based plasma simulation, the team found that only a couple hundred pulses at low performance, and a small handful of pulses at high performance, were sufficient to train and validate the new model.
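As a rough illustration of how a physics backbone and a learned correction can be combined, here is a minimal sketch in Python. The toy dynamics, variable names, and training loop are invented for illustration only; they are not the team's model or code.

```python
# Minimal sketch (not the authors' code): a simple physics step plus a learned
# neural-network correction, trained on residuals between the physics model
# and measured plasma states. All dynamics and constants are illustrative.
import torch
import torch.nn as nn

def physics_step(state, control, dt=1e-3):
    """Toy physics model: plasma current and stored energy relax toward the
    commanded rampdown targets with assumed time constants."""
    Ip, W = state[..., 0], state[..., 1]          # plasma current, stored energy
    Ip_cmd, heat = control[..., 0], control[..., 1]
    dIp = (Ip_cmd - Ip) / 0.05                    # assumed current relaxation time (s)
    dW = heat - W / 0.02                          # assumed energy confinement time (s)
    return state + dt * torch.stack([dIp, dW], dim=-1)

class ResidualNet(nn.Module):
    """Small network that learns whatever the toy physics model misses."""
    def __init__(self, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(4, hidden), nn.Tanh(), nn.Linear(hidden, 2))
    def forward(self, state, control):
        return self.net(torch.cat([state, control], dim=-1))

def hybrid_step(state, control, residual_net, dt=1e-3):
    # Physics carries the backbone; the network adds a data-driven correction.
    return physics_step(state, control, dt) + dt * residual_net(state, control)

def train(residual_net, pulses, epochs=100, lr=1e-3):
    """Fit the residual net so multi-step rollouts match measured pulses.
    `pulses` is a list of (states, controls) tensors of shape (T, 2)."""
    opt = torch.optim.Adam(residual_net.parameters(), lr=lr)
    for _ in range(epochs):
        for states, controls in pulses:
            pred, loss = states[0], 0.0
            for t in range(len(states) - 1):
                pred = hybrid_step(pred, controls[t], residual_net)
                loss = loss + ((pred - states[t + 1]) ** 2).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
```

The structure mirrors the point made in the article: the physics model carries most of the predictive burden, so the network only has to learn a residual, which is why a few hundred low-performance pulses and a small handful of high-performance ones can be enough.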
The data they used for the new study came from the TCV, the Swiss “variable configuration tokamak” operated by the Swiss Plasma Center at EPFL (the Swiss Federal Institute of Technology Lausanne). The TCV is a small experimental fusion device that is used for research purposes, often as a test bed for next-generation device solutions. Wang used the data from several hundred TCV plasma pulses that included properties of the plasma such as its temperature and energies during each pulse’s ramp-up, run, and ramp-down. He trained the new model on this data, then tested it and found it was able to accurately predict the plasma’s evolution given the initial conditions of a particular tokamak run.
The researchers also developed an algorithm to translate the model’s predictions into practical “trajectories,” or plasma-managing instructions that a tokamak controller can automatically carry out to, for instance, adjust the magnets or temperature to maintain the plasma’s stability. They implemented the algorithm on several TCV runs and found that it produced trajectories that safely ramped down a plasma pulse, in some cases faster and without disruptions compared to runs without the new method.
“At some point the plasma will always go away, but we call it a disruption when the plasma goes away at high energy. Here, we ramped the energy down to nothing,” Wang notes. “We did it a number of times. And we did things much better across the board. So, we had statistical confidence that we made things better.”
The work was supported in part by Commonwealth Fusion Systems (CFS), an MIT spinout that intends to build the world’s first compact, grid-scale fusion power plant. The company is developing a demo tokamak, SPARC, designed to produce net-energy plasma, meaning that it should generate more energy than it takes to heat up the plasma. Wang and his colleagues are working with CFS on ways that the new prediction model and tools like it can better predict plasma behavior and prevent costly disruptions to enable safe and reliable fusion power.
“We’re trying to tackle the science questions to make fusion routinely useful,” Wang says. “What we’ve done here is the start of what is still a long journey. But I think we’ve made some nice progress.”
Additional support for the research came from the framework of the EUROfusion Consortium, via the Euratom Research and Training Program and funded by the Swiss State Secretariat for Education, Research, and Innovation.
MIT engineers have developed a printable aluminum alloy that can withstand high temperatures and is five times stronger than traditionally manufactured aluminum.
The new printable metal is made from a mix of aluminum and other elements that the team identified using a combination of simulations and machine learning, which significantly pruned the number of possible combinations of materials to search through. While traditional methods would require simulating over 1 million possible combinations of materials, the team’s new machine learning-based approach needed only to evaluate 40 possible compositions before identifying an ideal mix for a high-strength, printable aluminum alloy.
When they printed the alloy and tested the resulting material, the team confirmed that, as predicted, the aluminum alloy was as strong as the strongest aluminum alloys that are manufactured today using traditional casting methods.
The researchers envision that the new printable aluminum could be made into stronger, more lightweight and temperature-resistant products, such as fan blades in jet engines. Fan blades are traditionally cast from titanium — a material that is more than 50 percent heavier and up to 10 times costlier than aluminum — or made from advanced composites.
“If we can use lighter, high-strength material, this would save a considerable amount of energy for the transportation industry,” says Mohadeseh Taheri-Mousavi, who led the work as a postdoc at MIT and is now an assistant professor at Carnegie Mellon University.
“Because 3D printing can produce complex geometries, save material, and enable unique designs, we see this printable alloy as something that could also be used in advanced vacuum pumps, high-end automobiles, and cooling devices for data centers,” adds John Hart, the Class of 1922 Professor and head of the Department of Mechanical Engineering at MIT.
Hart and Taheri-Mousavi provide details on the new printable aluminum design in a paper published in the journal Advanced Materials. The paper’s MIT co-authors include Michael Xu, Clay Houser, Shaolou Wei, James LeBeau, and Greg Olson, along with Florian Hengsbach and Mirko Schaper of Paderborn University in Germany, and Zhaoxuan Ge and Benjamin Glaser of Carnegie Mellon University.
Micro-sizing
The new work grew out of an MIT class that Taheri-Mousavi took in 2020, which was taught by Greg Olson, professor of the practice in the Department of Materials Science and Engineering. As part of the class, students learned to use computational simulations to design high-performance alloys. Alloys are materials that are made from a mix of different elements, the combination of which imparts exceptional strength and other unique properties to the material as a whole.
Olson challenged the class to design an aluminum alloy that would be stronger than the strongest printable aluminum alloy designed to date. As with most materials, the strength of aluminum depends in large part on its microstructure: The smaller and more densely packed its microscopic constituents, or “precipitates,” the stronger the alloy would be.
With this in mind, the class used computer simulations to methodically combine aluminum with various types and concentrations of elements, to simulate and predict the resulting alloy’s strength. However, the exercise failed to produce a stronger result. At the end of the class, Taheri-Mousavi wondered: Could machine learning do better?
“At some point, there are a lot of things that contribute nonlinearly to a material’s properties, and you are lost,” Taheri-Mousavi says. “With machine-learning tools, they can point you to where you need to focus, and tell you for example, these two elements are controlling this feature. It lets you explore the design space more efficiently.”
Layer by layer
In the new study, Taheri-Mousavi continued where Olson’s class left off, this time looking to identify a stronger recipe for aluminum alloy. This time, she used machine-learning techniques designed to efficiently comb through data such as the properties of elements, to identify key connections and correlations that should lead to a more desirable outcome or product.
She found that, using just 40 compositions mixing aluminum with different elements, their machine-learning approach quickly homed in on a recipe for an aluminum alloy with higher volume fraction of small precipitates, and therefore higher strength, than what the previous studies identified. The alloy’s strength was even higher than what they could identify after simulating over 1 million possibilities without using machine learning.
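To make the idea concrete, here is a small, self-contained sketch of a surrogate-model search of the general kind described, written in Python with scikit-learn. The element list, composition bounds, and the stand-in "strength" function are all invented for illustration; this is a sketch of the technique, not the team's actual design workflow.

```python
# Illustrative sketch only: a surrogate-model loop that proposes new alloy
# compositions to evaluate, in the spirit of searching ~40 candidates instead
# of ~1 million. Elements, bounds, and the strength oracle are made up.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)
ELEMENTS = ["Zr", "Ni", "Er", "Yb", "Ta"]          # hypothetical additions to Al

def simulate_strength(x):
    """Stand-in for an expensive thermodynamic/precipitation simulation.
    Returns a fake 'strength' score for a composition vector x (wt. fractions)."""
    return float(-np.sum((x - 0.01 * np.arange(1, 6)) ** 2) + 0.01 * x.sum())

def propose(gp, n_candidates=500, dim=5):
    """Pick the candidate with the best predicted-mean-plus-uncertainty score."""
    cand = rng.uniform(0.0, 0.05, size=(n_candidates, dim))   # up to 5 wt.% each
    mean, std = gp.predict(cand, return_std=True)
    return cand[np.argmax(mean + std)]                        # simple UCB rule

# Start from a handful of random compositions, then iterate.
X = rng.uniform(0.0, 0.05, size=(8, 5))
y = np.array([simulate_strength(x) for x in X])
for _ in range(32):                                           # ~40 evaluations total
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
    x_next = propose(gp)
    X = np.vstack([X, x_next])
    y = np.append(y, simulate_strength(x_next))

best = X[np.argmax(y)]
print(dict(zip(ELEMENTS, np.round(best, 4))), "score:", y.max())
```

Each loop iteration fits a surrogate to everything evaluated so far and proposes the composition with the best optimistic prediction, which is how the budget of expensive evaluations can stay in the dozens rather than the millions.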
To physically produce this new strong, small-precipitate alloy, the team realized 3D printing would be the way to go instead of traditional metal casting, in which molten liquid aluminum is poured into a mold and is left to cool and harden. The longer this cooling time is, the more likely the individual precipitate is to grow.
The researchers showed that 3D printing, broadly also known as additive manufacturing, can be a faster way to cool and solidify the aluminum alloy. Specifically, they considered laser powder bed fusion (LPBF), a technique by which a powder is deposited, layer by layer, on a surface in a desired pattern and then quickly melted by a laser that traces over the pattern. The melted pattern is thin enough that it solidifies quickly before another layer is deposited and similarly “printed.” The team found that LPBF’s inherently rapid cooling and solidification enabled the small-precipitate, high-strength aluminum alloy that their machine learning method predicted.
“Sometimes we have to think about how to get a material to be compatible with 3D printing,” says study co-author John Hart. “Here, 3D printing opens a new door because of the unique characteristics of the process — particularly, the fast cooling rate. Very rapid freezing of the alloy after it’s melted by the laser creates this special set of properties.”
Putting their idea into practice, the researchers ordered a formulation of printable powder, based on their new aluminum alloy recipe. They sent the powder — a mix of aluminum and five other elements — to collaborators in Germany, who printed small samples of the alloy using their in-house LPBF system. The samples were then sent to MIT where the team ran multiple tests to measure the alloy’s strength and image the samples’ microstructure.
Their results confirmed the predictions made by their initial machine learning search: The printed alloy was five times stronger than a cast counterpart and 50 percent stronger than alloys designed using conventional simulations without machine learning. The new alloy’s microstructure also consisted of a higher volume fraction of small precipitates, and was stable at high temperatures of up to 400 degrees Celsius, a very high temperature for aluminum alloys.
The researchers are applying similar machine-learning techniques to further optimize other properties of the alloy.
“Our methodology opens new doors for anyone who wants to do 3D printing alloy design,” Taheri-Mousavi says. “My dream is that one day, passengers looking out their airplane window will see fan blades of engines made from our aluminum alloys.”
This work was carried out, in part, using MIT.nano’s characterization facilities.
A new 3-D-printed aluminum alloy is stronger than traditional aluminum, due to a key recipe that, when printed, produces aluminum (illustrated in brown) with nanometer scale precipitates (in light blue). The precipitates are arranged in regular, nano-scale patterns (blue and green in circle inset) that impart exceptional strength to the printed alloy.
Research led by the University of Cambridge has found the first clear evidence that the ‘good’ gut bacteria Bifidobacterium breve in pregnant mothers regulates the placenta’s production of hormones critical for a healthy pregnancy.
In a study in mice, the researchers compared the placentas of mice with no gut bacteria to those of mice with Bifidobacterium breve in their gut during pregnancy.
Pregnant mice without Bifidobacterium breve in their gut had a higher rate of complications including fetal growth restriction and fetal low blood sugar, and increased fetal loss.
This gut bacteria seems to play a crucial role in prompting the placenta to produce pregnancy hormones that allow the mother’s body to support the pregnancy.
This is the first time scientists have found a link between the gut microbiome and the placenta.
The researchers say this paves the way for testing the mother’s gut microbiome to identify pregnancy complications like gestational diabetes, preeclampsia or miscarriage early - and then manipulating it with probiotics to improve the chances of a healthy baby.
“Our results open up an entirely new way to assess the health of a pregnant mother and her developing fetus by looking at the mother’s gut microbiome,” said Dr Jorge Lopez Tello, first author of the report, who carried out the work while at the University of Cambridge’s Department of Physiology, Development and Neuroscience.
He added: “Everybody ignores the placenta - after nine months of pregnancy it just gets thrown in the bin. But now we understand more about how it works, in the future pregnancy complications like gestational diabetes, preeclampsia, miscarriage and stillbirth might be prevented simply by adjusting the mother’s gut microbes to improve the function of the placenta.”
The placenta is a crucial organ during pregnancy that connects mother to fetus, and provides the nutrients, oxygen and hormones essential for healthy development of the baby.
Remote control
In the study, over 150 biological processes in the placenta - involving over 400 different proteins - were found to be different in mice with, and without, Bifidobacterium breve in their gut.
The mice with Bifidobacterium breve in their gut lost fewer of their pregnancies. Their placentas were better at absorbing and transporting nutrients, like amino acids and lactate, from mother to fetus - vital for fetal growth. Their placentas also produced more of the hormones important for pregnancy, such as prolactins and pregnancy-specific glycoproteins.
By studying mice, whose diet, activity and gut microbiome could be tightly controlled, the scientists could be sure their findings were not caused by other factors. Using mice allowed them to pinpoint the importance of Bifidobacterium breve - a finding that is also relevant to human pregnancies.
The scientists say more research is needed to understand how these ‘good’ bacteria work within the human body’s full gut microbiome, and whether they could be manipulated in the gut without any negative effects.
Bifidobacterium breve occurs naturally in the human gut microbiome, but in pregnant women the levels of this ‘good’ bacteria can be altered by stress or obesity. It is widely available as a supplement in probiotic drinks and tablets.
Healthier pregnancies
The babies of up to 10% of first-time mothers have low birth weight or fetal growth restriction. If a baby doesn’t grow properly in the womb, there is an increased risk of conditions like cerebral palsy in infants, and anxiety, depression, autism, and schizophrenia in later life.
“Our research reveals a whole new layer of information about how pregnancy works, and will help us find new interventions that can improve the chances of a healthy pregnancy for mother and baby,” said Professor Amanda Sferruzzi-Perri in the University of Cambridge’s Department of Physiology, Development and Neuroscience and St John’s College, senior author of the report.
“It’s exciting to think that beneficial microbes like Bifidobacterium - which naturally support gut and immune health - could be harnessed during pregnancy to improve outcomes. Using something like a probiotic offers a promising alternative to traditional therapeutics, potentially reducing risks while enhancing wellbeing in mother and baby,” said Professor Lindsay Hall at the University of Birmingham’s College of Medicine and Health, who was also involved in the work.
When Bifidobacterium breve, widely available in probiotic drinks, is present in the gut of pregnant females it boosts the placenta’s production of pregnancy hormones to reduce the likelihood of complications like preeclampsia and miscarriage.
Our results open up an entirely new way to assess the health of a pregnant mother and her developing fetus by looking at the mother’s gut microbiome.
In a world full of competing sounds, we often have to filter out a lot of noise to hear what’s most important. This critical skill may come more easily for people with musical training, according to scientists at MIT’s McGovern Institute for Brain Research, who used brain imaging to follow what happens when people try to focus their attention on certain sounds.
When Cassia Low Manting, a recent MIT postdoc working in the labs of MIT Professor and McGovern Institute PI John Gabrieli and former McGovern Institute PI Dimitrios Pantazis, asked people to focus on a particular melody while another melody played at the same time, individuals with musical backgrounds were, unsurprisingly, better able to follow the target tune. An analysis of study participants’ brain activity suggests this advantage arises because musical training sharpens neural mechanisms that amplify the sounds they want to listen to while turning down distractions.
“People can hear, understand, and prioritize multiple sounds around them that flow on a moment-to-moment basis,” explains Gabrieli, who is the Grover Hermann Professor of Health Sciences and Technology at MIT. “This study reveals the specific brain mechanisms that successfully process simultaneous sounds on a moment-to-moment basis and promote attention to the most important sounds. It also shows how musical training alters that processing in the mind and brain, offering insight into how experience shapes the way we listen and pay attention.”
The research team, which also included senior author Daniel Lundqvist at the Karolinska Institute in Sweden, reported their open-access findings Sept. 17 in the journal Science Advances. Manting, who is now at the Karolinska Institute, notes that the research is part of an ongoing collaboration between the two institutions.
Overcoming challenges
Participants in the study had vastly different backgrounds when it came to music. Some were professional musicians with deep training and experience, while others struggled to differentiate between the two tunes they were played, despite each one’s distinct pitch. This disparity allowed the researchers to explore how the brain’s capacity for attention might change with experience. “Musicians are very fun to study because their brains have been morphed in ways based on their training,” Manting says. “It’s a nice model to study these training effects.”
Still, the researchers had significant challenges to overcome. It has been hard to study how the brain manages auditory attention, because when researchers use neuroimaging to monitor brain activity, they see the brain’s response to all sounds: those that the listener cares most about, as well as those the listener is trying to ignore. It is usually difficult to figure out which brain signals were triggered by which sounds.
Manting and her colleagues overcame this challenge with a method called frequency tagging. Rather than playing the melodies in their experiments at a constant volume, the researchers made the volume of each melody oscillate, rising and falling at a particular frequency. Each melody had its own frequency, creating detectable patterns in the brain signals that responded to it. “When you play these two sounds simultaneously to the subject and you record the brain signal, you can say, this 39-Hertz activity corresponds to the lower-pitch sound and the 43-Hertz activity corresponds specifically to the higher-pitch sound,” Manting explains. “It is very clean and very clear.”
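The logic of frequency tagging can be shown with a few lines of Python. The numbers below (a 39 Hz and a 43 Hz tag, the attention weights, the noise level) are made up for illustration and are not the study's stimuli or analysis code; the point is only that each stream's envelope leaves a fingerprint at its own tagging frequency.

```python
# Toy illustration of frequency tagging (invented numbers, not the study's
# stimuli or pipeline): each melody's volume rises and falls at its own rate,
# and the power of a recorded signal at 39 Hz vs. 43 Hz shows how strongly
# each stream is represented.
import numpy as np

fs, dur = 1000.0, 10.0                     # sampling rate (Hz) and duration (s)
t = np.arange(0, dur, 1 / fs)

def envelope(tag_hz):
    """Volume envelope oscillating at the tagging frequency."""
    return 0.5 * (1 + np.sin(2 * np.pi * tag_hz * t))

env_low = envelope(39.0)                   # tag for the lower-pitch melody
env_high = envelope(43.0)                  # tag for the higher-pitch melody

# Simulated 'brain' signal: the cortex tracks each melody's envelope, with the
# attended (higher-pitch) stream weighted more strongly, plus sensor noise.
rng = np.random.default_rng(1)
response = 0.4 * env_low + 1.0 * env_high + 0.2 * rng.standard_normal(t.size)

spectrum = np.abs(np.fft.rfft(response)) ** 2
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

def power_at(f_hz):
    """Spectral power in the bin closest to f_hz."""
    return spectrum[np.argmin(np.abs(freqs - f_hz))]

print("power at 39 Hz:", round(power_at(39.0)))
print("power at 43 Hz:", round(power_at(43.0)))   # larger: the 'attended' stream
```

In the actual experiments the recorded signal is the magnetoencephalography trace rather than a simulated mixture, but the readout is analogous: relative power at the two tag frequencies indicates how strongly each melody is represented.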
When they paired frequency tagging with magnetoencephalography, a noninvasive method of monitoring brain activity, the team was able to track how their study participants’ brains responded to each of two melodies during their experiments. While the two tunes played, subjects were instructed to follow either the higher-pitched or the lower-pitched melody. When the music stopped, they were asked about the final notes of the target tune: did they rise or did they fall? The researchers could make this task harder by making the two tunes closer together in pitch, as well as by altering the timing of the notes.
Manting used a survey that asked about musical experience to score each participant’s musicality, and this measure had an obvious effect on task performance: The more musical a person was, the more successful they were at following the tune they had been asked to track.
To look for differences in brain activity that might explain this, the research team developed a new machine-learning approach to analyze their data. They used it to tease apart what was happening in the brain as participants focused on the target tune — even, in some cases, when the notes of the distracting tune played at the exact same time.
Top-down versus bottom-up attention
What they found was a clear separation of brain activity associated with two kinds of attention, known as top-down and bottom-up attention. Manting explains that top-down attention is goal-oriented, involving a conscious focus — the kind of attention listeners called on as they followed the target tune. Bottom-up attention, on the other hand, is triggered by the nature of the sound itself. A fire alarm would be expected to trigger this kind of attention, both with its volume and its suddenness. The distracting tune in the team’s experiments triggered activity associated with bottom-up attention — but more so in some people than in others.
“The more musical someone is, the better they are at focusing their top-down selective attention, and the less the effect of bottom-up attention is,” Manting explains.
Manting expects that musicians use their heightened capacity for top-down attention in other situations, as well. For example, they might be better than others at following a conversation in a room filled with background chatter. “I would put my bet on it that there is a high chance that they will be great at zooming into sounds,” she says.
She wonders, however, if one kind of distraction might actually be harder for a musician to filter out: the sound of their own instrument. Manting herself plays both the piano and the Chinese harp, and she says hearing those instruments is “like someone calling my name.” It’s one of many questions about how musical training affects cognition that she plans to explore in her future work.
“The more musical someone is, the better they are at focusing their top-down selective attention, and the less the effect of bottom-up attention is,” says Cassia Low Manting, a recent MIT postdoc working in the labs of John Gabrieli and Dimitrios Pantazis.
In a world full of competing sounds, we often have to filter out a lot of noise to hear what’s most important. This critical skill may come more easily for people with musical training, according to scientists at MIT’s McGovern Institute for Brain Research, who used brain imaging to follow what happens when people try to focus their attention on certain sounds.When Cassia Low Manting, a recent MIT postdoc working in the labs of MIT Professor and McGovern Institute PI John Gabrieli and former McG
In a world full of competing sounds, we often have to filter out a lot of noise to hear what’s most important. This critical skill may come more easily for people with musical training, according to scientists at MIT’s McGovern Institute for Brain Research, who used brain imaging to follow what happens when people try to focus their attention on certain sounds.
When Cassia Low Manting, a recent MIT postdoc working in the labs of MIT Professor and McGovern Institute PI John Gabrieli and former McGovern Institute PI Dimitrios Pantazis, asked people to focus on a particular melody while another melody played at the same time, individuals with musical backgrounds were, unsurprisingly, better able to follow the target tune. An analysis of study participants’ brain activity suggests this advantage arises because musical training sharpens neural mechanisms that amplify the sounds they want to listen to while turning down distractions.
“People can hear, understand, and prioritize multiple sounds around them that flow on a moment-to-moment basis,” explains Gabrieli, who is the Grover Hermann Professor of Health Sciences and Technology at MIT. “This study reveals the specific brain mechanisms that successfully process simultaneous sounds on a moment-to-moment basis and promote attention to the most important sounds. It also shows how musical training alters that processing in the mind and brain, offering insight into how experience shapes the way we listen and pay attention.”
The research team, which also included senior author Daniel Lundqvist at the Karolinska Institute in Sweden, reported their open-access findings Sept. 17 in the journal Science Advances. Manting, who is now at the Karolinska Institute, notes that the research is part of an ongoing collaboration between the two institutions.
Overcoming challenges
Participants in the study had vastly different backgrounds when it came to music. Some were professional musicians with deep training and experience, while others struggled to differentiate between the two tunes they were played, despite each one’s distinct pitch. This disparity allowed the researchers to explore how the brain’s capacity for attention might change with experience. “Musicians are very fun to study because their brains have been morphed in ways based on their training,” Manting says. “It’s a nice model to study these training effects.”
Still, the researchers had significant challenges to overcome. It has been hard to study how the brain manages auditory attention, because when researchers use neuroimaging to monitor brain activity, they see the brain’s response to all sounds: those that the listener cares most about, as well as those the listener is trying to ignore. It is usually difficult to figure out which brain signals were triggered by which sounds.
Manting and her colleagues overcame this challenge with a method called frequency tagging. Rather than playing the melodies in their experiments at a constant volume, the researchers made the volume of each melody oscillate, rising and falling at a particular frequency. Each melody had its own frequency, creating detectable patterns in the brain signals that responded to it. “When you play these two sounds simultaneously to the subject and you record the brain signal, you can say, this 39-Hertz activity corresponds to the lower-pitch sound and the 43-Hertz activity corresponds specifically to the higher-pitch sound,” Manting explains. “It is very clean and very clear.”
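To make the idea concrete, here is a minimal numerical sketch (purely illustrative; this is not the study’s stimulus or MEG analysis code) of how two streams tagged at 39 Hz and 43 Hz remain separable in the spectrum of a single mixed signal. The carrier pitches and all parameter values below are arbitrary choices for the demo.

```python
# Toy demonstration of frequency tagging: two tones are amplitude-modulated at
# 39 Hz and 43 Hz, mixed together, and the two "tags" are then recovered from
# the power envelope of the mixture. Illustrative only; parameters are arbitrary.
import numpy as np

fs = 2000                          # sample rate (Hz)
t = np.arange(0, 2.0, 1 / fs)      # two seconds of signal

def tagged_tone(carrier_hz, tag_hz):
    """A pure tone whose volume rises and falls at the tagging frequency."""
    envelope = 0.5 * (1 + np.sin(2 * np.pi * tag_hz * t))
    return envelope * np.sin(2 * np.pi * carrier_hz * t)

# "Lower-pitch" stream tagged at 39 Hz, "higher-pitch" stream tagged at 43 Hz.
mixture = tagged_tone(440.0, 39.0) + tagged_tone(660.0, 43.0)

# Square the mixture to expose its slow volume fluctuations, then inspect the
# 30-50 Hz band of the spectrum: each tag shows up as its own peak.
spectrum = np.abs(np.fft.rfft(mixture ** 2))
freqs = np.fft.rfftfreq(len(mixture), 1 / fs)
band = (freqs > 30) & (freqs < 50)
top_two = np.sort(freqs[band][np.argsort(spectrum[band])[-2:]])
print("strongest envelope frequencies:", top_two)   # ~[39., 43.]
```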
When they paired frequency tagging with magnetoencephalography, a noninvasive method of monitoring brain activity, the team was able to track how their study participants’ brains responded to each of two melodies during their experiments. While the two tunes played, subjects were instructed to follow either the higher-pitched or the lower-pitched melody. When the music stopped, they were asked about the final notes of the target tune: did they rise or did they fall? The researchers could make this task harder by making the two tunes closer together in pitch, as well as by altering the timing of the notes.
Manting used a survey that asked about musical experience to score each participant’s musicality, and this measure had an obvious effect on task performance: The more musical a person was, the more successful they were at following the tune they had been asked to track.
To look for differences in brain activity that might explain this, the research team developed a new machine-learning approach to analyze their data. They used it to tease apart what was happening in the brain as participants focused on the target tune — even, in some cases, when the notes of the distracting tune played at the exact same time.
Top-down versus bottom-up attention
What they found was a clear separation of brain activity associated with two kinds of attention, known as top-down and bottom-up attention. Manting explains that top-down attention is goal-oriented, involving a conscious focus — the kind of attention listeners called on as they followed the target tune. Bottom-up attention, on the other hand, is triggered by the nature of the sound itself. A fire alarm would be expected to trigger this kind of attention, both with its volume and its suddenness. The distracting tune in the team’s experiments triggered activity associated with bottom-up attention — but more so in some people than in others.
“The more musical someone is, the better they are at focusing their top-down selective attention, and the less the effect of bottom-up attention is,” Manting explains.
Manting expects that musicians use their heightened capacity for top-down attention in other situations, as well. For example, they might be better than others at following a conversation in a room filled with background chatter. “I would put my bet on it that there is a high chance that they will be great at zooming into sounds,” she says.
She wonders, however, if one kind of distraction might actually be harder for a musician to filter out: the sound of their own instrument. Manting herself plays both the piano and the Chinese harp, and she says hearing those instruments is “like someone calling my name.” It’s one of many questions about how musical training affects cognition that she plans to explore in her future work.
Rebutting ‘myths of inequality’
Phil Gramm (left) with Larry Summers. Photos by Veasey Conway/Harvard Staff Photographer
Christina Pazzanese
Harvard Staff Writer
October 6, 2025
Former veteran legislator, economist Phil Gramm argues unequal distribution of wealth inevitable; policy to engineer level playing field is mistake
There’s nothing inherently wrong with unequal distribution of income or wealth in the nation, according to Phil Gramm, former veteran U.S. legislator and economist.
The problems arise when the federal government tries to engineer a level playing field, whether through blunt corporate anti-trust regulations or through assistance programs for the poor, said the Texas Republican, who spent about three terms each in the House and Senate, during a talk last Tuesday at Harvard Kennedy School.
“People have different aptitudes and interests, different levels of energy, and inequality is a natural occurrence of competition. I’m not upset by inequality,” Gramm told Harvard economist Larry Summers during a conversation about economic policy.
The two worked together in the late 1990s when Gramm chaired the Senate Banking Committee and Summers was named U.S. Treasury secretary by then-President Bill Clinton.
Gramm, who holds a Ph.D. from the University of Georgia and taught at Texas A&M, discussed what he called “myths” about key periods in U.S. economic history, from the Industrial Revolution to the 2008-2009 financial crisis, the subject of his 2022 book, “The Myth of American Inequality: How Government Biases Policy Debate.”
The debate over income and wealth inequality has been shaped by progressive critics, such as French economist Thomas Piketty, who often overstate the gap between rich and poor by only considering earned income and ignoring the value of benefits most lower-income individuals receive from the government, like Medicaid or food stamps, Gramm argued.
“So, when you see that the poverty family has an income of $29,000, that’s not counting a refundable tax credit because they don’t take taxes into account; that’s not counting food stamps, where you get a debit card and you go to the grocery store; that’s not counting rent subsidies, not counting Medicaid,” Gramm said. “If you count all those things, the ratio of the top quintile to the bottom is not 16, 17 to 1, but 4 to 1.”
Summers, the Charles W. Eliot University Professor and the Frank and Denie Weil Director of the Mossavar-Rahmani Center for Business and Government at HKS, agreed that many official statistics underestimate the income of lower-income people so they appear in worse shape than they actually are, which some then use as the basis for an argument for more assistance programs.
Even when those programs successfully reduce the number of poor people, government data often doesn’t reflect that, he added.
Theda Skocpol, Ph.D. ’75, Victor S. Thomas Professor of Government and Sociology, pushed back on Gramm’s assertion.
Even if what he says is accurate, she said, “Inequalities of wealth and income have still skyrocketed to an extraordinary degree since the late 1970s.”
The big winners during the Industrial Revolution, tycoons such as Andrew Carnegie, greatly benefited from federal subsidies, Skocpol said.
“A lot of those fortunes were made out of government contracts given through patronage capitalism during and after the Civil War,” a situation that is again rampant today to “an extraordinary degree,” she said.
“A lot of the fortunes that are being made by the people who are attending those dinners in the White House are being made through favored government contracts, and we can’t be sure that’s going to maximize efficiency at all, let alone opportunity to innovate in a vibrant capitalist economy.”
Gramm also addressed the decision by the Trump administration to have the government take equity stakes in private companies like Intel. He said he adamantly opposed the practice because of its potential negative ramifications for the economy.
“It’s ripe for corruption and special treatment, and again you’ve got to look when you start this kind of thing, even if your intentions are good and even if your first selectees are good, you got to accept the fact that it’s going to be” done repeatedly with no end in sight, he said.
Gramm argued for a simpler tax system with fewer deductions and lower rates as part of a debate on the merits of taxing income right away while allowing capital gains to be shielded until assets are sold.
The veteran lawmaker was asked whether the record wealth of Silicon Valley tech entrepreneurs like Elon Musk, who has amassed a nearly $1 trillion fortune, ought to be somehow regulated.
He demurred, saying Musk created that $1 trillion in value for shareholders, and his products provide benefit to consumers.
“He did more good making it than he’ll ever do giving it away,” he said.
Up to 20 early-stage research projects will receive funding from this year’s Global Hubs Research Seed Grants, connecting Cornell faculty with researchers at Global Hubs partner universities.
The program recognizes and supports outstanding scholars primed to make important contributions in their fields. The 2025 cohort spans the humanities, engineering, sciences and social sciences.
Matthew D. Shoulders, the Class of 1942 Professor of Chemistry, a MacVicar Faculty Fellow, and an associate member of the Broad Institute of MIT and Harvard, has been named head of the MIT Department of Chemistry, effective Jan. 16, 2026.
“Matt has made pioneering contributions to the chemistry research community through his research on mechanisms of proteostasis and his development of next-generation techniques to address challenges in biomedicine and agriculture,” says Nergis Mavalvala, dean of the MIT School of Science and the Curtis and Kathleen Marble Professor of Astrophysics. “He is also a dedicated educator, beloved by undergraduates and graduates alike. I know the department will be in good hands as we double down on our commitment to world-leading research and education in the face of financial headwinds.”
Shoulders succeeds Troy Van Voorhis, the Robert T. Haslam and Bradley Dewey Professor of Chemistry, who has been at the helm since October 2019.
“I am tremendously grateful to Troy for his leadership the past six years, building a fantastic community here in our department. We face challenges, but also many exciting opportunities, as a department in the years to come,” says Shoulders. “One thing is certain: Chemistry innovations are critical to solving pressing global challenges. Through the research that we do and the scientists we train, our department has a huge role to play in shaping the future.”
Shoulders studies how cells fold proteins, and he develops and applies novel protein engineering techniques to challenges in biotechnology. His work spans chemistry and biochemistry fields including proteostasis, extracellular matrix biology, virology, evolution, and synthetic biology, yielding important insights into how cells build healthy tissues and how proteins evolve while also influencing approaches to disease therapy and biotechnology development.
“Matt is an outstanding researcher whose work touches on fundamental questions about how the cell machinery directs the synthesis and folding of proteins. His discoveries about how that machinery breaks down as a result of mutations or in response to stress have a fundamental impact on how we think about and treat human diseases,” says Van Voorhis.
In one part of his current research program, Shoulders is studying how protein folding systems in cells — known as chaperones — shape the evolution of their clients. Among other discoveries, his lab has shown that viral pathogens hijack human chaperones to enable their rapid evolution and escape from host immunity. In related recent work, they have discovered that these same chaperones can promote access to malignancy-driving mutations in tumors. Beyond fundamental insights into evolutionary biology, these findings hold potential to open new therapeutic strategies to target cancer and viral infections.
“Matt’s ability to see both the details and the big picture makes him an outstanding researcher and a natural leader for the department,” says Timothy Swager, the John D. MacArthur Professor of Chemistry. “MIT Chemistry can only benefit from his dedication to understanding and addressing the parts and the whole.”
Shoulders also leads a food security project through the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS). Shoulders, along with MIT Research Scientist Robbie Wilson, assembled an interdisciplinary team based at MIT to enhance climate resilience in agriculture by improving one of the most inefficient aspects of photosynthesis, the carbon dioxide-fixing plant enzyme RuBisCO. J-WAFS funded this high-risk, high-reward MIT Grand Challenge project in 2023, and it has received further support from federal research agencies and the Grantham Foundation for the Protection of the Environment.
“Our collaborative team of biochemists and synthetic biologists, computational biologists, and chemists is deeply integrated with plant biologists, creating a robust feedback loop for enzyme engineering,” Shoulders says. “Together, this team is making a concerted effort using state-of-the-art techniques to engineer crop RuBisCO with an eye to helping make meaningful gains in securing a stable crop supply, hopefully with accompanying improvements in both food and water security.”
In addition to his research contributions, Shoulders has taught multiple classes for Course V, including 5.54 (Advances in Chemical Biology) and 5.111 (Principles of Chemical Science), along with a number of other key chemistry classes. His contributions to a 5.111 “bootcamp” through the MITx platform served to address gaps in the classroom curriculum by providing online tools to help undergraduate students better grasp the material in the chemistry General Institute Requirement (GIR). His development of Guided Learning Demonstrations to support first-year chemistry courses at MIT has helped bring the lab to the GIR, and also contributed to the popularity of 5.111 courses offered regularly via MITx.
“I have had the pleasure of teaching with Matt on several occasions, and he is a fantastic educator. He is an innovator both inside and outside the classroom and has an unwavering commitment to his students’ success,” says Van Voorhis of Shoulders, who was named a 2022 MacVicar Faculty Fellow, and who received a Committed to Caring award through the Office of Graduate Education.
Shoulders also founded the MIT Homeschool Internship Program for Science and Technology, which brings high school students to campus for paid summer research experiences in labs across the Institute.
He is a founding member of the Department of Chemistry’s Quality of Life Committee and chair for the last six years, helping to improve all aspects of opportunity, professional development, and experience in the department: “countless changes that have helped make MIT a better place for all,” as Van Voorhis notes, including creating a peer mentoring program for graduate students and establishing universal graduate student exit interviews to collect data for department-wide assessment and improvement.
At the Institute level, Shoulders has served on the Committee on Graduate Programs, Committee on Sexual Misconduct Prevention and Response (in which he co-chaired the provost's working group on the Faculty and Staff Sexual Misconduct Survey), and the Committee on Assessment of Biohazards and Embryonic Stem Cell Research Oversight, among other roles.
Shoulders graduated summa cum laude from Virginia Tech in 2004, earning a BS in chemistry with a minor in biochemistry. He earned a PhD in chemistry at the University of Wisconsin at Madison in 2009 under Professor Ronald Raines. Following an American Cancer Society Postdoctoral Fellowship at Scripps Research Institute, working with professors Jeffery Kelly and Luke Wiseman, Shoulders joined the MIT Department of Chemistry faculty as an assistant professor in 2012. Shoulders also serves as an associate member of the Broad Institute and an investigator at the Center for Musculoskeletal Research at Massachusetts General Hospital.
Among his many awards, Shoulders has received an NIH Director's New Innovator Award under the NIH High-Risk, High-Reward Research Program; an NSF CAREER Award; an American Cancer Society Research Scholar Award; the Camille Dreyfus Teacher-Scholar Award; and most recently the Ono Pharma Foundation Breakthrough Science Award.
The Institute is being funded by a gift of £5 million by alumnus Humphrey Battcock (1973), a Foundation Fellow at Downing since 2013 and donor of transformational gifts to support the College and its students.
Led by Dr David Halpern as Director, the Institute will primarily focus on supporting and bringing together researchers and policymakers from other institutions, rather than running its own separate primary research. An early priority will be to fund and host ‘policy retreats’ at Downing on key social and economic challenges, with a particular focus on issues early in the policy cycle, before fixed policy positions have emerged.
The visit began at the College, where His Royal Highness toured the Women’s Art Collection – the largest collection of modern and contemporary art by women in Europe. He was welcomed by Dr Rachel Polonsky, Acting President and Chair of the Art Committee, before meeting civic guests including Councillor Dinah Pounds (Mayor of Cambridge), Councillor Peter McDonald (Chair of Cambridgeshire County Council), and Mr Paul Bristow (Mayor of the Cambridgeshire and Peterborough Combined Authority).
Once inside the College, the Duke was conducted by the Curator, Mrs Harriet Loffler, on a private tour of the Collection, which includes more than 600 works by artists such as Dame Paula Rego DBE, Maggi Hambling CBE, Lubaina Himid CBE and Judy Chicago. The Collection, founded in 1986, is displayed against the background of the College’s distinctive modernist architecture and has been accredited by Arts Council England since 2018.
His Royal Highness also met senior members of the University and of Murray Edwards, including the Vice-Chancellor of the University, Professor Deborah Prentice, Vice-President Professor Miranda Griffin, and Dr Victoria Harvey, Senior Tutor at the College and President of the Cambridge University Real Tennis Club.
Following the tour, the Duke attended a reception where he met students who have completed the Duke of Edinburgh’s Award and then signed the visitors’ book.
Dr Polonsky said: "This place of learning is governed under a Royal Charter granted by Her Majesty Queen Elizabeth. At the start of this academic year, in which we mark the 60th anniversary of our wonderful buildings and the 40th anniversary of the Women's Art Collection, it is a special privilege to welcome His Royal Highness."
In the afternoon the visit turned from art to sport at the Cambridge University Real Tennis Club at Grange Road. In his role as Patron of the Tennis & Rackets Association (T&RA), His Royal Highness officially launched the inaugural National Real Tennis Open Event – a nationwide initiative bringing together all 24 UK real tennis clubs for the first time. The event, which ran from 2–5 October, was designed to introduce the historic game to new audiences through open days, exhibition matches and taster sessions.
At the Club the Duke was welcomed by the President, Dr Harvey, and met representatives from the T&RA, including Chairman Mr Richard Compton-Burnett, CEO Mr Chris Davies and National Event Coordinator Mr Nick Brodie. He also met students, alumni and professionals taking part in the tournament before enjoying the opening Division 1 match.
During his visit the Duke, an alumnus of Jesus College, displayed his personal enthusiasm for a sport that has its roots in the Court of King Henry VIII and is the forerunner of modern lawn tennis.
He also met Professor Bhaskar Vira, Pro-Vice-Chancellor for Education and Environmental Sustainability and Chair of the University Sports Committee, and Mr Mark Brian, Director of Sport, alongside sponsors and supporters of the Club. The afternoon also afforded opportunity to speak with players and alumni about the future of the game and the importance of attracting new players.
The day provided a rare opportunity to showcase two distinct aspects of Cambridge’s cultural life: a pioneering collection that celebrates women’s contributions to the visual arts, and an historic sport being introduced to new generations.
The Women’s Art Collection at Murray Edwards is open to everyone, every day, between 10am and 6pm and is free to visit. There is no need to pre-book a visit: www.murrayedwards.cam.ac.uk/womens-art-collection
Membership of the Cambridge University Real Tennis Club is open to all: men, women, young and old, school children, students, locals, visitors (town or gown) and not just to members of the University. Guests are also welcome. The club has recently undergone a large development programme which has provided a new court, club room and a new pros room. Visit: www.curtc.net
Murray Edwards College and the Cambridge University Real Tennis Club were honoured to welcome His Royal Highness The Duke of Edinburgh KG KT GCVO on Thursday 2 October for a day of engagements that highlighted the College’s unique artistic offering and the Club’s sporting traditions.
Brunkow received her Ph.D. from Princeton in 1991 in molecular biology. She shares the award with Fred Ramsdell and Shimon Sakaguchi for discoveries about how the immune system is kept in check.
MIT chemists have designed a new type of fluorescent molecule that they hope could be used for applications such as generating clearer images of tumors.
The new dye is based on a borenium ion — a positively charged form of boron that can emit light in the red to near-infrared range. Until recently, these ions have been too unstable to be used for imaging or other biomedical applications.
In a study appearing today in Nature Chemistry, the researchers showed that they could stabilize borenium ions by attaching them to a ligand. This approach allowed them to create borenium-containing films, powders, and crystals, all of which emit and absorb light in the red and near-infrared range.
That is important because near-IR light is easier to see when imaging structures deep within tissues, which could allow for clearer images of tumors and other structures in the body.
“One of the reasons why we focus on red to near-IR is because those types of dyes penetrate the body and tissue much better than light in the UV and visible range. Stability and brightness of those red dyes are the challenges that we tried to overcome in this study,” says Robert Gilliard, the Novartis Professor of Chemistry at MIT and the senior author of the study.
MIT research scientist Chun-Lin Deng is the lead author of the paper. Other authors include Bi Youan (Eric) Tra PhD ’25, former visiting graduate student Xibao Zhang, and graduate student Chonghe Zhang.
Stabilized borenium
Most fluorescent imaging relies on dyes that emit blue or green light. Those imaging agents work well in cells, but they are not as useful in tissue because low levels of blue and green fluorescence produced by the body interfere with the signal. Blue and green light also scatters in tissue, limiting how deeply it can penetrate.
Imaging agents that emit red fluorescence can produce clearer images, but most red dyes are inherently unstable and don’t produce a bright signal, because of their low quantum yields (the ratio of fluorescent photons emitted per photon of light absorbed). For many red dyes, the quantum yield is only about 1 percent.
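For context, the quantum yield referenced here is simply the emitted-to-absorbed photon ratio; written out (standard textbook notation, not notation taken from the paper):

\[
\Phi_F \;=\; \frac{\text{photons emitted as fluorescence}}{\text{photons absorbed}}, \qquad \Phi_F \approx 0.01 \ \text{for many red dyes,}
\]

so a dye with a 1 percent quantum yield emits roughly one photon for every hundred it absorbs.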
Among the molecules that can emit near-infrared light are borenium cations — positively charged ions containing an atom of boron attached to three other atoms.
When these molecules were first discovered in the mid-1980s, they were considered “laboratory curiosities,” Gilliard says. These molecules were so unstable that they had to be handled in a sealed container called a glovebox to protect them from exposure to air, which can lead them to break down.
Later, chemists realized they could make these ions more stable by attaching them to molecules called ligands. Working with these more stable ions, Gilliard’s lab discovered in 2019 that they had some unusual properties: namely, they could respond to changes in temperature by emitting different colors of light.
However, at that point, “there was a substantial problem in that they were still too reactive to be handled in open air,” Gilliard says.
His lab began working on new ways to further stabilize them using ligands known as carbodicarbenes (CDCs), which they reported in a 2022 study. Due to this stabilization, the compounds can now be studied and handled without using a glovebox. They are also resistant to being broken down by light, unlike many previous borenium-based compounds.
In the new study, Gilliard began experimenting with the anions (negatively charged ions) that are a part of the CDC-borenium compounds. Interactions between these anions and the borenium cation generate a phenomenon known as exciton coupling, the researchers discovered. This coupling, they found, shifted the molecules’ emission and absorption properties toward the infrared end of the color spectrum. These molecules also generated a high quantum yield, allowing them to shine more brightly.
“Not only are we in the correct region, but the efficiency of the molecules is also very suitable,” Gilliard says. “We’re up to percentages in the thirties for the quantum yields in the red region, which is considered to be high for that region of the electromagnetic spectrum.”
Potential applications
The researchers also showed that they could convert their borenium-containing compounds into several different states, including solid crystals, films, powders, and colloidal suspensions.
For biomedical imaging, Gilliard envisions that these borenium-containing materials could be encapsulated in polymers, allowing them to be injected into the body to use as an imaging dye. As a first step, his lab plans to work with researchers in the chemistry department at MIT and at the Broad Institute of MIT and Harvard to explore the potential of imaging these materials within cells.
Because of their temperature responsiveness, these materials could also be deployed as temperature sensors, for example, to monitor whether drugs or vaccines have been exposed to temperatures that are too high or low during shipping.
“For any type of application where temperature tracking is important, these types of ‘molecular thermometers’ can be very useful,” Gilliard says.
If incorporated into thin films, these molecules could also be useful as organic light-emitting diodes (OLEDs), particularly in new types of materials such as flexible screens, Gilliard says.
“The very high quantum yields achieved in the near-IR, combined with the excellent environmental stability, make this class of compounds extremely interesting for biological applications,” says Frieder Jaekle, a professor of chemistry at Rutgers University, who was not involved in the study. “Besides the obvious utility in bioimaging, the strong and tunable near-IR emission also makes these new fluorophores very appealing as smart materials for anticounterfeiting, sensors, switches, and advanced optoelectronic devices.”
In addition to exploring possible applications for these dyes, the researchers are now working on extending their color emission further into the near-infrared region, which they hope to achieve by incorporating additional boron atoms. Those extra boron atoms could make the molecules less stable, so the researchers are also working on new types of carbodicarbenes to help stabilize them.
The research was funded by the Arnold and Mabel Beckman Foundation and the National Institutes of Health.
MIT chemists have created a fluorescent, boron-containing molecule that is stable when exposed to air and can emit light in the red and near-infrared range. The dye can be made into crystals (shown in these images), films, or powders. The images at top were taken in ambient light and the images at bottom in UV light.
The University of Cambridge study of 615 state schools in England found that while socio-economic background does not have a significant impact on students’ desire to study languages, poorer students are disproportionately concentrated in schools that give languages lower priority. This significantly reduces their chances of studying a language after the age of 14.
The research identified a seven percentage point gap between the proportion of disadvantaged students at schools where languages were optional at GCSE, and at those where they were considered ‘core’. Uptake at these schools diverged dramatically, with the proportion of students studying a GCSE language varying by more than 50 percentage points.
These findings suggest that disadvantaged students have been worst affected by the national decline in language study since 2004, when GCSE languages ceased to be compulsory. In the academic year 2023/4, just 45.7% of eligible students in England took a language GCSE. By contrast, 97.9% of upper secondary students in the EU study at least one foreign language.
The study also shows that if schools offer a wider choice of languages, their GCSE language scores tend to be better overall. For every additional language offered at GCSE, schools’ average scores for GCSE languages rose by almost a quarter of a grade.
The research, published in The Language Learning Journal, was undertaken by Dr Karen Forbes, Associate Professor in Second Language Education at the Faculty of Education, University of Cambridge.
Forbes said it raised concerns about widening inequalities in language learning. “It seems obvious, but surely all children should have the same opportunity to learn a language,” she said. “In practice, for less wealthy students these subjects are often de-emphasised. If this is not addressed, the national decline in language learning will continue and probably accelerate.”
Language learning in England is compulsory from ages seven to 14, with most pupils studying French, Spanish or German. Thereafter, schools decide whether to treat languages as ‘core’ or optional. In addition, some offer languages through a specific pathway tied to the English Baccalaureate (EBacc): a performance measure based on the number of pupils taking GCSEs in what the Government considers important subjects, which includes languages.
The Cambridge study explored how schools’ policies on languages – treating them as ‘core’, attaching them to an EBacc pathway, or leaving them fully optional – affects uptake at GCSE and students’ attainment.
It also considered other factors that might influence uptake and grades, including students’ prior attainment (measured using test scores at Key Stage 2), the number of “disadvantaged” students, and the number of students who use English as an additional language (EAL), meaning they speak a different language at home.
Out of the 615 schools, 19.2% treated languages as ‘core’, 29.6% offered an EBacc pathway, and 51.2% positioned languages as completely optional. The vast majority of GCSE students took French, Spanish, or German; but some studied Chinese, Italian, Urdu, Hebrew, Arabic, Japanese or Bengali.
Disadvantaged students were more likely to attend schools where languages were optional, accounting for almost 29% of all students, compared with just 21.3% in schools where languages were core. The proportion in EBacc pathway schools was 25.65%: almost identical to the national average.
Critically, the effect of school language policies on uptake was stark. In schools where languages were core, 82.6% of students studied a language to GCSE. The figure sank to 52.7% in EBacc pathway schools and just 31.9% in schools where languages were optional. As the study shows, these are the schools that disproportionately serve less affluent communities.
Even after accounting for prior attainment and EAL pupils, school policy remained the strongest predictor of students’ likelihood of studying a language to GCSE. In contrast, disadvantage had no significant effect. In other words, given the chance, poorer students are just as likely to continue language study past age 14 as their peers.
The research also considered the effects that increasing language uptake has on results. On average, each percentage increase in uptake was linked to a 0.019 point drop, or about one-fiftieth of a grade, in the school’s average GCSE grade across all language subjects.
This effect was more than outweighed by the benefits of offering a wider choice of languages, however. For each additional GCSE language on the timetable, the average grade rose by 0.234 points – almost a quarter of a grade.
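As a rough illustration of how those two coefficients trade off, here is a back-of-the-envelope calculation using the figures reported above; the specific school scenario (a 10-point rise in uptake plus one extra language) is hypothetical.

```python
# Illustrative arithmetic with the coefficients reported in the study:
# each percentage-point rise in uptake is linked to a 0.019-point dip in the
# average GCSE language grade, while each extra language offered is linked to
# a 0.234-point rise. The scenario values below are hypothetical.
UPTAKE_EFFECT_PER_PP = -0.019    # grade points per percentage point of uptake
EXTRA_LANGUAGE_EFFECT = 0.234    # grade points per additional language offered

uptake_increase_pp = 10          # hypothetical: uptake rises 10 percentage points
extra_languages = 1              # hypothetical: one more language on the timetable

net_change = (uptake_increase_pp * UPTAKE_EFFECT_PER_PP
              + extra_languages * EXTRA_LANGUAGE_EFFECT)
print(f"estimated net change in average grade: {net_change:+.3f} points")
# -> +0.044: the wider choice more than offsets the dip from broader uptake.
```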
Forbes said that how schools position languages in the curriculum sends important signals to students. “When schools frame languages as useful and important the students pick up on this,” she said. “Offering a wider range of languages also gives them a choice, and they are more likely to be motivated if they are studying a language they have actively chosen.”
While the EBacc has not reversed the national decline in language learning, the findings provide some tentative evidence that it has a positive effect in some schools, bearing in mind the 20 percentage point difference between uptake in EBacc pathway schools and schools where languages are purely optional.
“Personally, I would love to see languages reestablished as core subjects at GCSE across all schools – this would signal its importance and create more equitable opportunities for students,” Forbes said. “In the absence of that, something is better than nothing, and national-level accountability measures for languages like the EBacc do seem to influence both schools and students. Broadening choice – rather than narrowing it – is key to reducing inequalities between students, and to raising both participation and attainment.”
Students from less wealthy backgrounds are more likely to attend schools where learning a language to GCSE is treated as optional – and not necessarily strongly encouraged – new research shows.
How does he manage to balance two great passions? Bachelor’s degree student Milan Kühn opted to study mechanical engineering, but still devotes a lot of time to music. He explains why in the video.
For patients with inflammatory bowel disease, antibiotics can be a double-edged sword. The broad-spectrum drugs often prescribed for gut flare-ups can kill helpful microbes alongside harmful ones, sometimes worsening symptoms over time. When fighting gut inflammation, you don’t always want to bring a sledgehammer to a knife fight.
Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and McMaster University have identified a new compound that takes a more targeted approach. The molecule, called enterololin, suppresses a group of bacteria linked to Crohn’s disease flare-ups while leaving the rest of the microbiome largely intact. Using a generative AI model, the team mapped how the compound works, a process that usually takes years but was accelerated here to just months.
“This discovery speaks to a central challenge in antibiotic development,” says Jon Stokes, senior author of a new paper on the work, assistant professor of biochemistry and biomedical sciences at McMaster, and research affiliate at MIT’s Abdul Latif Jameel Clinic for Machine Learning in Health. “The problem isn’t finding molecules that kill bacteria in a dish — we’ve been able to do that for a long time. A major hurdle is figuring out what those molecules actually do inside bacteria. Without that detailed understanding, you can’t develop these early-stage antibiotics into safe and effective therapies for patients.”
Enterololin is a stride toward precision antibiotics: treatments designed to knock out only the bacteria causing trouble. In mouse models of Crohn’s-like inflammation, the drug zeroed in on Escherichia coli, a gut-dwelling bacterium that can worsen flares, while leaving most other microbial residents untouched. Mice given enterololin recovered faster and maintained a healthier microbiome than those treated with vancomycin, a common antibiotic.
Pinning down a drug’s mechanism of action, the molecular target it binds inside bacterial cells, normally requires years of painstaking experiments. Stokes’ lab discovered enterololin using a high-throughput screening approach, but determining its target would have been the bottleneck. Here, the team turned to DiffDock, a generative AI model developed at CSAIL by MIT PhD student Gabriele Corso and MIT Professor Regina Barzilay.
DiffDock was designed to predict how small molecules fit into the binding pockets of proteins, a notoriously difficult problem in structural biology. Traditional docking algorithms search through possible orientations using scoring rules, often producing noisy results. DiffDock instead frames docking as a probabilistic reasoning problem: a diffusion model iteratively refines guesses until it converges on the most likely binding mode.
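The iterative-refinement idea can be sketched in a few lines. The snippet below is a toy numerical analogy, not the actual DiffDock model, training procedure, or API; the pocket coordinates and the scoring function are invented solely for illustration.

```python
# Toy analogy for diffusion-style pose refinement: start from a random ligand
# placement and repeatedly propose noisy moves, keeping improvements, while the
# noise scale shrinks over the steps -- coarse adjustments early, fine ones late.
# This is NOT DiffDock; the pocket center and score are made up for the demo.
import numpy as np

rng = np.random.default_rng(0)
POCKET_CENTER = np.array([12.0, 4.0, -7.0])   # hypothetical binding-pocket center

def score(pose):
    """Crude score: higher is better as the pose nears the pocket center."""
    return -np.linalg.norm(pose - POCKET_CENTER)

def refine(pose, steps=300):
    pose = np.asarray(pose, dtype=float)
    for step in range(steps):
        noise = 2.0 * (1.0 - step / steps) + 0.05    # shrinking noise schedule
        proposal = pose + rng.normal(scale=noise, size=3)
        if score(proposal) > score(pose):            # keep moves that improve the fit
            pose = proposal
    return pose

final_pose = refine([0.0, 0.0, 0.0])
print("refined pose:", np.round(final_pose, 2), "score:", round(score(final_pose), 3))
```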
“In just a couple of minutes, the model predicted that enterololin binds to a protein complex called LolCDE, which is essential for transporting lipoproteins in certain bacteria,” says Barzilay, who also co-leads the Jameel Clinic. “That was a very concrete lead — one that could guide experiments, rather than replace them.”
Stokes’ group then put that prediction to the test. Using DiffDock predictions as an experimental GPS, they first evolved enterololin-resistant mutants of E. coli in the lab, which revealed that changes in the mutant’s DNA mapped to lolCDE, precisely where DiffDock had predicted enterololin to bind. They also performed RNA sequencing to see which bacterial genes switched on or off when exposed to the drug, as well as used CRISPR to selectively knock down expression of the expected target. These laboratory experiments all revealed disruptions in pathways tied to lipoprotein transport, exactly what DiffDock had predicted.
“When you see the computational model and the wet-lab data pointing to the same mechanism, that’s when you start to believe you’ve figured something out,” says Stokes.
For Barzilay, the project highlights a shift in how AI is used in the life sciences. “A lot of AI use in drug discovery has been about searching chemical space, identifying new molecules that might be active,” she says. “What we’re showing here is that AI can also provide mechanistic explanations, which are critical for moving a molecule through the development pipeline.”
That distinction matters because mechanism-of-action studies are often a major rate-limiting step in drug development. Traditional approaches can take 18 months to two years, or more, and cost millions of dollars. In this case, the MIT–McMaster team cut the timeline to about six months, at a fraction of the cost.
Enterololin is still in the early stages of development, but translation is already underway. Stokes’ spinout company, Stoked Bio, has licensed the compound and is optimizing its properties for potential human use. Early work is also exploring derivatives of the molecule against other resistant pathogens, such as Klebsiella pneumoniae. If all goes well, clinical trials could begin within the next few years.
The researchers also see broader implications. Narrow-spectrum antibiotics have long been sought as a way to treat infections without collateral damage to the microbiome, but they have been difficult to discover and validate. AI tools like DiffDock could make that process more practical, rapidly enabling a new generation of targeted antimicrobials.
For patients with Crohn’s and other inflammatory bowel conditions, the prospect of a drug that reduces symptoms without destabilizing the microbiome could mean a meaningful improvement in quality of life. And in the bigger picture, precision antibiotics may help tackle the growing threat of antimicrobial resistance.
“What excites me is not just this compound, but the idea that we can start thinking about the mechanism of action elucidation as something we can do more quickly, with the right combination of AI, human intuition, and laboratory experiments,” says Stokes. “That has the potential to change how we approach drug discovery for many diseases, not just Crohn’s.”
“One of the greatest challenges to our health is the increase of antimicrobial-resistant bacteria that evade even our best antibiotics,” adds Yves Brun, professor at the University of Montreal and distinguished professor emeritus at Indiana University Bloomington, who wasn’t involved in the paper. “AI is becoming an important tool in our fight against these bacteria. This study uses a powerful and elegant combination of AI methods to determine the mechanism of action of a new antibiotic candidate, an important step in its potential development as a therapeutic.”
Corso, Barzilay, and Stokes wrote the paper with McMaster researchers Denise B. Catacutan, Vian Tran, Jeremie Alexander, Yeganeh Yousefi, Megan Tu, Stewart McLellan, and Dominique Tertigas, and professors Jakob Magolan, Michael Surette, Eric Brown, and Brian Coombes. Their research was supported, in part, by the Weston Family Foundation; the David Braley Centre for Antibiotic Discovery; the Canadian Institutes of Health Research; the Natural Sciences and Engineering Research Council of Canada; M. and M. Heersink; Canadian Institutes for Health Research; Ontario Graduate Scholarship Award; the Jameel Clinic; and the U.S. Defense Threat Reduction Agency Discovery of Medical Countermeasures Against New and Emerging Threats program.
The researchers posted sequencing data in public repositories and released the DiffDock-L code openly on GitHub.
By using AI to sift through more than 10,000 molecules, researchers found enterololin (inset), a compound that blocks a key pathway in harmful gut bacteria and, in mice with IBD, eased infection without disturbing the rest of the microbiome.
Serendipitous meetings, scholarly collaborations, and an ethos of "encouraging junior faculty to think big" laid the groundwork for groundbreaking achievement in AI.
Work & Economy
The fear: Wholesale cheating with AI at work, school. The reality: It’s complicated.
ChatGPT usage appears ‘more wholesome and practical’ than researchers expected
Christy DeSmith
Harvard Staff Writer
October 3, 2025
7 min read
By and large, it appears school and work assignments are not being outsourced entirely to ChatGPT. A new working paper by David Deming, Danoff Dean of Harvard College, uncovers the more mundane realities of people’s AI habits.
“It’s more wholesome and practical than I expected,” said Deming, a labor economist who also serves as Harvard Kennedy School’s Isabelle and Scott Black Professor of Political Economy. “I think that’s a good story. But it’s probably a disappointment if you think this thing is taking over the world. It’s also not a very good story for those predicting huge productivity gains.”
Deming’s large-scale study, co-authored with in-house economists at OpenAI, explores both the who and the how of ChatGPT usage worldwide. Key findings show rapid uptake has eased or even erased demographic gaps related to geography and gender. A separate set of analyses drew on a huge sample of anonymized messages to more accurately situate the technology’s everyday role as a researcher and gut-check.
“People have found that it’s great to have an assistant, an adviser, and a guide,” Deming said. “Sure, you can use it to automate things, but the prompting is important, and you really have to go back and forth with it. Whereas there’s very low friction in just asking it for advice or feedback.”
“People have found that it’s great to have an assistant, an adviser, and a guide.”
File photo by Stephanie Mitchell/Harvard Staff Photographer
Deming, who remains bullish on college graduates’ career prospects in the 21st century, recently published two high-profile inquiries into AI adoption and labor market disruptions. He was presenting his research last spring at the Bay Area headquarters of OpenAI, the artificial intelligence company that developed ChatGPT, when Ronnie Chatterjee, the firm’s chief economist, proposed partnering on a third study based on internal data.
This was before Deming was announced as dean of Harvard College.
“A couple months into the project, I said to the team, ‘I decided to take on this new job. But don’t worry. I’m still committed to the paper,’” he recalled with a laugh.
Their findings show ChatGPT outpacing Google’s historic growth. As Deming noted in a recent Substack, it took Google eight years to reach 1 billion daily messages following its public debut in 1999. ChatGPT, released in November 2022, reached that milestone in less than two years.
As of July 2025, they find approximately 10 percent of the global adult population is using the technology, with adults ages 18 to 25 responsible for nearly half of the platform’s 2.6 billion daily messages.
“I knew young people would be heavier users,” offered Deming, noting the analysis excluded minors. “But the scale is surprising. It suggests this generation will be truly AI native.”
Deming et al. wondered whether the well-documented gender gap in ChatGPT adoption rates was closing with the product’s growth. They approached the question by studying whether users had traditionally masculine or feminine names, according to a variety of data sets including the Social Security Administration’s annual ranking of popular baby names for boys and girls.
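The study’s own pipeline is not reproduced in this article; the following is only a minimal sketch of the general name-matching idea, assuming a simple first-name lookup built from public baby-name counts such as the SSA lists (the tiny table and the 90 percent threshold below are illustrative, not the researchers’ choices).

```python
# Illustrative sketch of name-based classification (not the study's code).
# A real lookup would be built from the full SSA baby-name files; the rows
# below are a hypothetical stand-in.
from collections import defaultdict

SSA_ROWS = [            # (name, sex, count), as in an SSA "yobXXXX.txt" file
    ("Emma", "F", 15000), ("Emma", "M", 12),
    ("Liam", "M", 18000),
    ("Taylor", "F", 6000), ("Taylor", "M", 2500),
]

def build_lookup(rows, threshold=0.9):
    """Label a first name 'F', 'M', or 'ambiguous' based on its count shares."""
    counts = defaultdict(lambda: {"F": 0, "M": 0})
    for name, sex, n in rows:
        counts[name.lower()][sex] += n
    lookup = {}
    for name, c in counts.items():
        share_f = c["F"] / (c["F"] + c["M"])
        if share_f >= threshold:
            lookup[name] = "F"
        elif share_f <= 1 - threshold:
            lookup[name] = "M"
        else:
            lookup[name] = "ambiguous"
    return lookup

def classify_user(first_name, lookup):
    return lookup.get(first_name.lower(), "unknown")

lookup = build_lookup(SSA_ROWS)
print(classify_user("Emma", lookup))    # -> F
print(classify_user("Taylor", lookup))  # -> ambiguous
print(classify_user("Ziggy", lookup))   # -> unknown
```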
In early 2023, just months after the ChatGPT product launch, the economists found roughly 80 percent of weekly active users had traditionally male names. As of July 2025, users with traditionally female names constituted just over half of all users.
“That crossover happened in the last few months,” Deming noted.
Also surprising was the fact that people in middle-income countries, including South Korea and Chile, are now adopting the technology faster than those in the wealthiest economies.
“There’s no longer a big difference in usage between people in Brazil and the U.S.,” Deming said.
“I knew young people would be heavier users. But the scale is surprising. It suggests this generation will be truly AI native.”
Other findings concern how people are using the technology and how usage varies across demographics. This part of the research, completed with careful attention to protecting user privacy, meant developing a taxonomy for various kinds of ChatGPT prompts.
The job of categorizing nearly 1 million messages, sent between May 2024 and June 2025, ultimately fell to — what else? — ChatGPT-5.
“We asked the large language model whether each message was work-related — or whether it was asking for tutoring or teaching, whether it was asking about what products to buy, whether it was asking for personal advice,” Deming explained.
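The paper’s actual taxonomy and prompt wording are not given in this article, so the snippet below is only a rough sketch of that kind of classification call using the OpenAI Python SDK; the model identifier and the category list are assumptions for illustration.

```python
# Rough sketch of an LLM-based message classifier (not the study's prompt).
# Requires the `openai` package and an OPENAI_API_KEY in the environment.
from openai import OpenAI

CATEGORIES = [  # hypothetical label set for illustration
    "work-related", "tutoring or teaching", "shopping advice",
    "personal advice", "seeking information", "practical guidance", "writing",
]

client = OpenAI()

def classify_message(text: str) -> str:
    prompt = (
        "Classify the following ChatGPT message into exactly one category "
        f"from this list: {', '.join(CATEGORIES)}. Reply with the category only.\n\n"
        f"Message: {text}"
    )
    resp = client.chat.completions.create(
        model="gpt-5",  # assumed model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content.strip()

print(classify_message("I'm 65 and hurt my left hamstring. What stretches should I do?"))
```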
As of June 2024, the data show an even split between work and personal messages. A year later, personal usage had far outpaced anything work-related — especially among young adults — and accounted for nearly three-quarters of all messages sent via ChatGPT’s consumer plans.
To verify the accuracy of this research method, results were compared with classifications made by humans. Additional tests showed the technology accurately categorizing submissions drawn from WildChat, a public database of voluntarily submitted ChatGPT messages.
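As a generic illustration of that kind of check, rather than the study’s actual protocol, agreement between the model’s labels and a human-labeled sample can be summarized with accuracy and Cohen’s kappa:

```python
# Generic agreement check between model and human labels (illustrative only).
from sklearn.metrics import accuracy_score, cohen_kappa_score

human_labels = ["work-related", "personal advice", "writing", "seeking information"]
model_labels = ["work-related", "personal advice", "writing", "practical guidance"]

print("accuracy:", accuracy_score(human_labels, model_labels))   # 0.75 here
print("kappa:   ", cohen_kappa_score(human_labels, model_labels))
```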
Roughly 80 percent of messages fell into three categories. Those categorized as “seeking information” grew from 14 to 24 percent between July 2024 and July 2025.
“This is basically the same thing as Google search,” Deming said. “But it’s maybe a little bit easier since you don’t need to scroll through a bunch of websites. It just gives you the answer.”
“The way people are using it is so general that it applies to every job. It makes me even more skeptical of the narrative that AI is replacing entry-level positions.”
Another popular category called “practical guidance” held steady, totaling roughly 29 percent of messages over the same period.
“These messages are a little more customized,” Deming explained. “It could be something like: ‘I’m a 65-year-old who hurt my left hamstring. Give me some stretches to do.’
“And if you don’t like the answer,” he added, “you can just say so. You’re having a conversation with the chatbot that is adaptive to your request. That’s something a traditional web search just can’t do.”
“Writing,” the top use for work-related messages, fell from 36 to 24 percent of messages over the period studied.
“Actually, most of the ‘writing’ usage isn’t just writing,” Deming clarified. “It’s summarizing documents, critiquing op-eds, cutting 1,000 words down to 800, or translating a text into Farsi.”
Work-related messages, far more common with educated users in highly paid professions, were subjected to additional scrutiny. These inputs were specifically mapped to work activities listed in the U.S. Department of Labor-sponsored Occupational Information Network (O*NET) database, with “documenting/recording information” and “making decisions and solving problems” emerging as top messaging categories by users in nearly every occupation.
As Deming tells it, white-collar professionals across industries are applying ChatGPT to a similar set of tasks.
“If you look at educators, the top use case isn’t a category called ‘training and teaching others,’” he said. “It’s ‘documenting/recording information.’ And for sales occupations, the top task isn’t ‘selling or influencing others.’ It’s ‘making decisions and solving problems.’
“The way people are using it is so general that it applies to every job,” concluded Deming, who’s working with a different set of colleagues to launch an AI tracker, with regular servings of data-backed insights on usage and labor market impacts in the U.S. “It makes me even more skeptical of the narrative that AI is replacing entry-level positions.”
Mike Pence (right) with Archon Fung. Niles Singer/Harvard Staff Photographer
Nation & World
U.S. needs to keep its friends closer, Pence says
First-term Trump VP: ‘If America isn’t leading the free world, the free world is not being led.’
Alvin Powell
Harvard Staff Writer
October 3, 2025
4 min read
Former Vice President Mike Pence said Tuesday that signs of diminished U.S. support for longtime allies have left him worried about conflict and strife akin to one of the deadliest eras in world history.
“I think we’re living in a very perilous time where America needs to be strong, we need to be ready, we need to stand with our allies, and we need to make it clear to enemies of freedom that — as President Kennedy said — we will bear any burden, pay any price to ensure the survival of liberty,” Pence said. “We stay strong, we stay unwavering, make it clear to people that we’re going to defend our interests and our allies in the world, and we got a shot at a peaceful future. Failing that, I think the second half of the 21st century could look a whole lot more like the first half of the 20th century.”
Pence spoke at the Kennedy School during an event hosted by the Institute of Politics and moderated by Archon Fung, director of the Ash Center for Democratic Governance and Innovation. It came months after Pence was awarded the Profile in Courage Award by the John F. Kennedy Library Foundation in Boston for his role certifying the 2020 election results as rioters surrounded the U.S. Capitol.
The U.S. is irreplaceable on the world stage because there’s no other allied nation that other countries will follow, Pence said, linking his own recent run for president to his sense that Donald Trump and others in the GOP were stepping back from global leadership.
“What drew me into my brief but memorable campaign back in 2023 was that I saw my old running mate and many in our party departing from those core ideals and principles,” Pence said. “If America isn’t leading the free world, the free world is not being led. There is no B team, no backup country that steps into that gap.”
Specifically, Pence said that the U.S. should help put Ukraine in a better position to defeat Russia. Otherwise, he said, the risk of a third world war will rise.
“Anyone who thinks yielding to a rapacious dictator avoids World War III needs to study World War II,” Pence said. “In my judgment, Vladimir Putin won’t stop until he’s stopped.”
Pence didn’t break with the current administration completely. He cited his support for lower taxes. He cast doubt on global warming prescriptions and insisted that climate solutions come from free market, rather than regulatory, approaches. He also voiced support for the Supreme Court’s 2024 decision overturning the Chevron doctrine, which had given regulators a powerful voice in interpreting and filling in gaps in laws passed by Congress.
Pence said that his Christian faith has been a powerful motivator for him in public life and that he believes that democracy requires that the people share a common moral order. He also affirmed the First Amendment’s religious freedom clauses.
He traced to his faith his belief that civility in politics is not just a courtesy to opponents, but an important feature that allows people to meet across differences.
“Democracy depends on heavy doses of civility,” he said. “When we’re civil with one another, even when we disagree, we have the opportunity to find common cause, not compromising principles or core values, but actually finding ways to work together to advance the country.”
Fung thanked Pence for his actions on Jan. 6, 2021, when Pence refused to bow to pressure from President Trump and the protesters invading the Capitol as Congress gathered to certify Joe Biden’s election victory. He also asked the former vice president about his faith in the durability of U.S. institutions.
Recalling Jan. 6, Pence said, “Many around the world were watching and I think they saw the resilience of our institutions and the strength of our institutions.”
He added: “I have confidence in the days ahead that Republicans and Democrats will hew to those roots and to that duty.”
The event, which sees the Presidents representing the losing teams of the previous year’s races formally challenge those from the winning teams, was held at London’s Somerset House.
Oxford’s Tobias Bernard and Heidi Long and their victorious Cambridge counterparts Noam Mouelle (Hughes Hall) and Gemma King (St John’s College) faced off before shaking hands in front of the London audience.
First raced in 1829 between the University of Oxford and the University of Cambridge, The Boat Race 2026 will be held on Easter Saturday 04 April with the Women’s Boat Race at 14.21 and the Men’s Boat Race an hour later, at 15.21.
Cambridge, the Light Blues, have held the upper hand with a clean sweep of both Boat Races in 2025. Oxford, the Dark Blues, will be determined to turn the tide in 2026. The records stand at 88-81 in favour of Cambridge Men and 49-30 to Cambridge Women.
The event also saw Channel 4 welcomed as the new free-to-air TV home of The Boat Race, with the broadcaster announcing that they have secured the rights in a multi-year agreement.
Siobhan Cassidy, Chair of The Boat Race Company, said: “We are delighted to work with Channel 4 to broadcast our unique, iconic and intensely British event between our two world-leading Universities.”
In other news, Cambridge University Boat Club (CUBC) has announced that Chair, Annamarie Phelps CBE OLY, has been elected Vice President of World Rowing. Phelps learnt to row at St John’s College and raced for Blondie (Cambridge Women’s Reserves) in 1987, beating Osiris (Oxford Women’s Reserves) by 12 seconds. She went on to represent Great Britain at the 1996 Summer Olympics in Atlanta and five World Championships, winning gold at Račice in 1993.
And a week of fantastic racing proved golden for Cambridge at the 2025 World Rowing Championships in Shanghai, China, with three alumni - James Robson (Peterhouse), Douwe de Graaf (St Edmund’s) and George Bourne (Peterhouse) and one current trialist, Camille VanderMeer (Peterhouse) - crowned champions, along with a slate of other impressive performances. Read the full results on the CUBC website.
The starting gun has been fired on The Boat Race 2026 with the staging of the historic Presidents’ Challenge.
Five NUS professors were bestowed the nation’s top accolades on 3 October 2025, in recognition of their exceptional contributions to science across diverse fields.
Organised by the National Research Foundation (NRF), the President’s Science and Technology Awards (PSTA), Singapore’s highest honours for researchers and engineers, acknowledge the achievements of outstanding individuals in stimulating the country’s science and technology (S&T) ecosystem.
Mr Tharman Shanmugaratnam, President of the Republic of Singapore, presented the President’s Science and Technology Medal (PSTM) to Professor Tan Eng Chye, NUS President. This prestigious award is conferred to individuals who have made distinguished, sustained and exceptional contributions, and played a strategic role in advancing Singapore’s development through promotion and management of S&T.
Two rising NUS research stars received the Young Scientist Award (YSA) from Mr Heng Swee Keat, Chairman of NRF. Administered by the Singapore National Academy of Science, the YSA is presented to researchers aged 40 and below, who are actively engaged in R&D in Singapore, and who have shown great potential to be world-class researchers in their respective research fields. These accomplished researchers are: Assistant Professor Andy Tay, Presidential Young Professor at CDE’s Department of Biomedical Engineering and Principal Investigator at iHealthtech and NUS Tissue Engineering Programme, as well as Assistant Professor Wang Xinchao, Presidential Young Professor at CDE’s Department of Electrical and Computer Engineering.
2025 President’s Science & Technology Medal Recipient: Professor Tan Eng Chye
Prof Tan Eng Chye was conferred the President’s Science and Technology Medal (PSTM) for his transformative contributions in advancing Singapore’s research and innovation landscape through interdisciplinary education, international partnerships, deep tech innovation and ecosystem building. He was also recognised for his achievements in nurturing future leaders and elevating Singapore’s global standing in science and technology.
Prof Tan substantially advanced the recruitment and development of top talent and brought NUS research to new levels of world-class excellence. In education, he instituted far-reaching reforms that have shaped S&T education in Singapore. Recognising the need for graduates with broad perspectives and technical depth, he led NUS to evolve from a disciplinary model towards one that is more flexible and interdisciplinary. This is exemplified by the curricula of the College of Humanities and Sciences, College of Design and Engineering and NUS College.
With an eye for talent, Prof Tan pioneered and developed schemes to recruit and nurture scientific talent for Singapore. This included bringing in eminent scientists to lead programmes in strategic areas and appointing outstanding directors to helm the Research Centres of Excellence (RCEs) at NUS. He also championed multiple talent and research initiatives, such as the NUS Overseas Graduate Scholarships, Overseas Postdoctoral Fellowships and Presidential Young Professorship scheme to attract and cultivate future academic leaders.
Prof Tan shaped a deeply collaborative culture within NUS and pushed for stronger partnerships between NUS and research groups across Singapore as well as with leading centres worldwide. Under his stewardship, translational research and research commercialisation at NUS took off strongly. In tandem, he also enhanced NUS’ global engagement, with a special focus on Southeast Asia.
Prof Tan has also played a significant role in shaping Singapore’s S&T ecosystem through his leadership capacities across national agencies, including the NRF, Singapore Economic Development Board, National University Health System, NUS High School of Mathematics and Science, Agency for Science Technology and Research (A*STAR), Defence Science and Technology Agency, and Defence Science Organisation.
Prof Tan’s visionary leadership and lifelong dedication have been pivotal in placing Singapore’s S&T capabilities and achievements firmly on the world map.
“A large part of my education and career has been at NUS, starting when I was only 20 years old. It fills me with both awe and pride watching the transformation of NUS from a modest teaching university in the 1980s into a comprehensive and globally recognised institution,” said Prof Tan.
“I feel privileged to have contributed to this journey. It has felt, at times, like climbing a mountain. The path has been steep and challenging. But when you pause and look back, you get a different perspective. You see the distance covered, the obstacles overcome, and the shared determination that has brought Singapore to where it is today,” he added.
2025 President’s Science Award Recipient: Professor Lim Chwee Teck
Professor Lim Chwee Teck was awarded the President’s Science Award (PSA) for his pioneering contributions to cancer research through innovative mechanobiology approaches, successfully bridging engineering, biological sciences and medicine to foster a deeper understanding of cancer metastasis.
Prof Lim’s trailblazing research resulted in a paradigm shift in our understanding of cancer metastasis – the spread of cancer from the primary tumour site to other parts of the body – which is the leading cause of cancer mortality. His work introduced the concept of “mechanoresilience”, unveiling why only a small population of cancer cells survive the treacherous journey through the blood circulatory system. Using custom-made microfluidic platforms to simulate the extreme physical and mechanical conditions, Prof Lim and his team identified the distinctive characteristics of these mechanoresilient cancer cells that confer survival advantage and treatment resistance. These revolutionary findings pave the way for more innovative and effective cancer treatment and better diagnostic tools to predict and address metastatic risk.
Prof Lim shared, “The President’s Science Award is a profound honour that recognises not only my team’s work, but also equally important, reinforces our commitment to push the frontiers of science, mentor the next generation, and translate discoveries into tangible benefits for society.”
Highly decorated with numerous local and international awards, Prof Lim is also an elected fellow of 10 esteemed academies, reflecting global recognition of his contributions, leadership and impact. Beyond his scientific excellence, Prof Lim is also a serial entrepreneur, having co-founded six start-ups, including one that commercialised a cancer biochip and achieved a successful IPO in 2018. This cancer biochip earned him the President’s Technology Award in 2011.
At NUS, Prof Lim serves as Director of iHealthtech, where he leads multidisciplinary teams to drive advances in healthcare. He also holds appointments in multiple departments and units in the university, including the Department of Biomedical Engineering of CDE and Mechanobiology Institute.
2025 President’s Science Award Recipient: Adjunct Professor Lisa Ng
Adjunct Professor Lisa Ng, who is also Executive Director at A*STAR’s Infectious Disease Labs and Biomedical Research Council, earned the President’s Science Award (PSA) for her pioneering contributions to viral infection immunology and advancing global pandemic management through groundbreaking research on Arboviruses, in particular, Chikungunya.
Chikungunya virus is an Arbovirus, a type of virus transmitted by arthropods such as mosquitoes. Although Chikungunya was initially overshadowed by dengue, Prof Ng was amongst the first to underscore its threat. Her team’s work revealed the viral mechanism that explained patients’ conflicting reactions to the infection and identified immune profiles that predict disease outcomes. These valuable insights opened the door to improved immune-based diagnostics, vaccines and host-based therapeutics to better tackle the spread of Chikungunya.
Before her work on Chikungunya, Prof Ng developed PCR-based tests for SARS during the 2003 outbreak and for H5N1 during the 2005–2006 bird flu outbreaks. Her molecular assays and immunoassays for multiple pathogens have been shared globally. During the COVID-19 pandemic, her team’s work guided national vaccination strategies and safety measures.
Prof Ng is a leading advocate for pandemic preparedness who translates lab discoveries into real‑world solutions through her close collaborations with academia, public health agencies, industry and global research networks. Her leadership has strengthened surveillance systems, informed vaccine pipelines and advanced international cooperation. Committed to mentorship, Prof Ng has supervised more than 20 PhD students and postdoctoral fellows, many of whom are now leading their own research programmes worldwide.
“This award is an honour that reflects the collective efforts of my colleagues and collaborators,” shared Prof Ng. “The various outbreaks, epidemics and COVID-19 pandemic reminded me how resilience and partnerships are vital in science. To young researchers, let curiosity guide you, and see every challenge as a chance to grow.”
2025 Young Scientist Award Recipient: Assistant Prof Andy Tay
Assistant Professor Andy Tay was presented the Young Scientist Award (YSA) for advancing biomaterial-based therapies that modulate immune responses to improve diabetic wound healing and enhance cancer immunotherapy outcomes.
To promote wound healing in diabetic patients, Asst Prof Tay’s team established a 4R (Remove, Reprogram, Replace, Reimagine) strategy that generates an optimal amount of an essential immune cell. In preclinical models, this 4R strategy accelerated wound healing by up to 200% compared to existing therapies.
As for the development of cancer immunotherapies, Asst Prof Tay’s lab team engineered nanostraws – hollow tubes about 10,000 times smaller than a grain of rice – to deliver proteins, RNA and DNA that genetically enhance the ability of immune T cells to detect and destroy cancer cells.
A prolific scientist and devoted educator, Asst Prof Tay has garnered more than S$8.5 million in research funding as the sole principal investigator, filed 7 invention disclosures, and published 32 research papers as the corresponding author. Meanwhile, his lab has trained 19 postdoctoral researchers and research assistants, 22 graduate students and 40 undergraduate students.
When asked to give advice to young, aspiring researchers, Asst Prof Tay, who also teaches at NUS College, said, “Ask lots of questions. Students may be scared to ask questions because they may appear insignificant, but honestly, even basic questions like cell density and molecule concentration can make such a big impact on research outcome.” And he emphasized, “No question is too small.”
2025 Young Scientist Award Recipient: Assistant Professor Wang Xinchao
Assistant Professor Wang Xinchao received the Young Scientist Award (YSA) for advancing machine learning techniques that train compact Artificial Intelligence (AI) models using limited resources, while achieving the capabilities of larger AI systems.
Asst Prof Wang’s research addresses the complex challenges that advanced AI models face through three interconnected domains: efficient strategies, efficient models and efficient data. In efficient strategies, he built DepGraph, which automates the time-consuming, laborious work of structurally pruning neural networks with just three lines of code. Torch-Pruning, DepGraph’s open-source counterpart, gained widespread popularity, exceeding 290,000 downloads, and has been integrated into NVIDIA’s commercial products. In efficient models, Asst Prof Wang designed MetaFormer, which greatly reduces computational cost and model size while preserving or even improving performance. In efficient data, he conceived approaches to shrink huge datasets into smaller, representative datasets that cut down computational burden yet maintain or boost model performance.
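For a sense of what “three lines of code” looks like in practice, here is a minimal sketch following Torch-Pruning’s documented dependency-graph pattern (build the graph, request a pruning group, execute it). Argument names can differ between library versions, so treat it as illustrative rather than definitive.

```python
# Minimal sketch of structured pruning with Torch-Pruning, the open-source
# counterpart of DepGraph. Illustrative; check the library docs for your version.
import torch
import torch_pruning as tp
from torchvision.models import resnet18

model = resnet18(weights=None).eval()
example_inputs = torch.randn(1, 3, 224, 224)

# 1. Build the dependency graph DepGraph uses to track coupled layers.
DG = tp.DependencyGraph().build_dependency(model, example_inputs=example_inputs)

# 2. Request a pruning group: removing output channels of conv1 automatically
#    pulls in every layer whose shape depends on them.
group = DG.get_pruning_group(model.conv1, tp.prune_conv_out_channels, idxs=[2, 6, 9])

# 3. Execute the group if it is consistent.
if DG.check_pruning_group(group):
    group.prune()

print(model.conv1)  # conv1 now has 61 output channels instead of 64
```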
Asst Prof Wang’s innovative outputs have tremendously lowered computational and financial barriers to AI development, enabling smaller laboratories, start-ups and even individuals to train competitive models despite limited hardware as well as computational and memory constraints. The impact of his contributions has also been recognised through other prestigious honours such as the Institute of Electrical and Electronics Engineers (IEEE) AI’s 10 to Watch and NUS’ Young Researcher Award.
“To young researchers in the field, I would like to share a simple yet powerful piece of advice from my postdoctoral supervisor, the late Professor Thomas S. Huang: ‘Just be yourself.’ In a field that moves rapidly and is often filled with noise, staying true to your values, your curiosity and your unique perspective is both grounding and empowering,” Asst Prof Wang elaborated. “Trust your instincts, embrace the unknown, and remember that meaningful contributions often come from perseverance, not perfection.”
For the first time, a new ultrasound technique allows researchers to stimulate multiple locations in the brain simultaneously. This opens up new possibilities for treating devastating brain diseases such as Alzheimer’s, Parkinson’s and depression in the future.
By Dr Daniel Chan, Assistant Dean (Office of Programmes) at the Faculty of Arts and Social Sciences, and Senior Lecturer of French at the Centre for Language Studies at NUS. CNA Online, 29 September 2025.
Last year, MIT physicists reported in the journal Nature that electrons can become fractions of themselves in graphene, an atomically thin form of carbon. This exotic electronic state, called the fractional quantum anomalous Hall effect (FQAHE), could enable more robust forms of quantum computing.
Now two young MIT-affiliated physicists involved in the discovery of FQAHE have been named the 2025 recipients of the McMillan Award from the University of Illinois for their work. Jiaqi Cai and Zhengguang Lu won the award “for the discovery of fractional anomalous quantum Hall physics in 2D moiré materials.”
Cai is currently a Pappalardo Fellow at MIT working with Pablo Jarillo-Herrero, the Cecil and Ida Green Professor of Physics, and collaborating with several other labs at MIT including Long Ju, the Lawrence and Sarah W. Biedenharn Career Development Associate Professor in the MIT Department of Physics. He discovered FQAHE while working in the laboratory of Professor Xiaodong Xu at the University of Washington.
Lu discovered FQAHE while working as a postdoc in Ju's lab and has since become an assistant professor at Florida State University.
The two independent discoveries were made in the same year.
“The McMillan award is the highest honor that a young condensed matter physicist can receive,” says Ju. “My colleagues and I in the Condensed Matter Experiment and the Condensed Matter Theory Group are very proud of Zhengguang and Jiaqi.”
Ju and Jarillo-Herrero are both also affiliated with the Materials Research Laboratory.
In addition to a monetary prize and a plaque, Lu and Cai will give a colloquium on their work at the University of Illinois this fall.
The fractional quantum Hall effect has generally been seen under very high magnetic fields, but MIT physicists have now observed it in simple graphene. In a five-layer graphene/hexagonal boron nitride (hBN) moiré superlattice, electrons (blue ball) interact with each other strongly and behave as if they are broken into fractional charges.
The Martin Trust Center for MIT Entrepreneurship announced that Ana Bakshi has been named its new executive director. Bakshi started in the role earlier this month at the start of the school year and will collaborate closely with the managing director, Ethernet Inventors Professor of the Practice Bill Aulet, to elevate the center to higher levels.
“Ana is uniquely qualified for this role through her knowledge and experience in entrepreneurship education at the highest levels, paired with her exceptional leadership and execution skills,” says Aulet. “Ana is committed to creating the highest-quality centers and institutes for entrepreneurs, first at King’s College London, where we met over 10 years ago, and then at Oxford University. This ideal skill set is compounded by her experience in leading high-growth companies, most recently as the chief operating officer in an award-winning AI startup. I’m honored and thrilled to welcome her to MIT — her knowledge and energy will greatly elevate our community, and the field as a whole.”
A rapidly changing environment creates imperative for raising the bar for entrepreneurship education
The need to raise the bar for innovation-driven entrepreneurship education is of utmost importance in today's world. The rate of change is getting faster and faster every day, especially with artificial intelligence, and is generating new problems that need to be solved, as well as exacerbating existing problems in climate, health care, manufacturing, future of work, education, and economic stratification, to name but a few. The world needs more entrepreneurs and better entrepreneurs.
Bakshi joins the Trust Center at a time when MIT is at the forefront of helping to develop people and systems that can turn challenges into opportunities using an entrepreneurial mindset, skill set, and way of operating. Bakshi’s deep experience and success will be key to unlocking this opportunity. “I am honored to be joining MIT and the Trust Center at a time when education and entrepreneurship have the chance to shape the greatest good for the greatest number,” Bakshi says. “In an era defined by both extraordinary challenges and extraordinary possibilities, the future will be built by those bold enough to try, and MIT will be at the forefront of this.”
Translating academic research into real-world impact
Bakshi has built two world-class entrepreneurship centers from the ground up. She served as the founding director at King’s College and then at Oxford. In this role, she was responsible for all aspects of these centers, including fundraising.
While at Oxford, she developed a data-driven approach to determining the efficacy of the centers’ programs, as evidenced by a 61-page study, “Universities: Drivers of Prosperity and Economic Recovery.”
As the director of the Oxford Foundry (Oxford’s cross-university entrepreneurship center), Bakshi focused on investing in ambitious founders and talent. The center was backed by global entrepreneurial leaders such as the founders of LinkedIn and Twitter, with corporate partnerships including Santander and EY, and investment funds including Oxford Science Enterprises (OSE). As of 2021, the startups supported by the Foundry and King’s College have raised over $500 million and have created nearly 3,000 jobs, spanning diverse industries including health tech, climate tech, cybersecurity, fintech, and deep tech spinouts focusing on world-class science.
In addition, she built the highly successful and economically sustainable Entrepreneurship School, Oxford’s first digital online learning platform.
Bakshi comes to MIT after almost two years in the private sector as chief operating officer (COO) of Quench.ai, a rapidly growing artificial intelligence startup with offices in London and New York City. She was the first C-suite employee at Quench.ai, serving as COO and now as senior advisor, helping companies unlock value from their knowledge through AI.
Right place, right time, right person moving at the speed of MIT AI
Entrepreneurship has been at the core of MIT’s identity and mission since its inception; it was turbocharged in the 1940s with the creation and operation of the RadLab, and it continues to this day.
"MIT has been a leader in entrepreneurship for decades. It’s now the third leg of the school, alongside teaching and research,” says Mark Gorenberg ’76, chair of the MIT Corporation. “I’m excited to have such a transformative leader as Ana join the Trust Center team, and I look forward to the impact she will have on the students and the wider academic community at MIT as we enter an exciting new phase in company building, driven by the accelerated use of AI and emerging technologies."
“In a time where we are rethinking management education, entrepreneurship as an interdisciplinary field to create impact is even more important to our future. To have such an experienced and accomplished leader in academia and the startup world, especially in AI, reinforces our commitment to be a global leader in this field,” says Richard M. Locke, John C Head III Dean at the MIT Sloan School of Management.
“MIT is a unique hub of research, innovation, and entrepreneurship, and that special mix creates massive positive impact that ripples around the world,” says Frederic Kerrest, MIT Sloan MBA ’09, co-founder of Okta, and member of the MIT Corporation. “In a rapidly changing, AI-driven world, Ana has the skills and experience to further accelerate MIT’s global leadership in entrepreneurship education to ensure that our students launch and scale the next generation of groundbreaking, innovation-driven startups.”
Prior to her time at Oxford and King’s College, Bakshi served as an elected councilor representing 6,000-plus constituents, held roles in international nongovernmental organizations, and led product execution strategy at MAHI, an award-winning family-led craft sauce startup, available in thousands of major retailers across the U.K. Bakshi sits on the advisory council for conservation charity Save the Elephants, leveraging AI-driven and scientific approaches to reduce human-wildlife conflict and protect elephant populations. Her work and impact have been featured across FT, Forbes, BBC, The Times, and The Hill. Bakshi was twice honored as a Top 50 Woman in Tech (U.K.), most recently in 2025.
“As AI changes how we learn, how we build, and how we scale, my focus will be on helping MIT expand its support for phenomenal talent — students and faculty — with the skills, ecosystem, and backing to turn knowledge into impact,” Bakshi says.
35 years of impact to date
The Trust Center was founded in 1990 by the late Professor Edward Roberts and serves all MIT students across all schools and all disciplines. It supports 60-plus courses and extensive extracurricular programming, including the delta v academic accelerator. Much of the work of the center is generated through the Disciplined Entrepreneurship methodology, which offers a proven approach to create new ventures. Over a thousand schools and other organizations across the world use Disciplined Entrepreneurship books and resources to teach entrepreneurship.
Now, with AI-powered tools like Orbit and JetPack, the Trust Center is changing the way that entrepreneurship is taught and practiced. Its mission is to produce the next generation of innovation-driven entrepreneurs while advancing the field more broadly to make it both rigorous and practical. This approach of leveraging proven evidence-based methodology, emerging technology, the ingenuity of MIT students, and responding to industry shifts is similar to how MIT established the field of chemical engineering in the 1890s. The desired result in both cases was to create a comprehensive, integrated, scalable, rigorous, and practical curriculum to create a new workforce to address the nation’s and world’s greatest challenges.
The new TX-Generative AI Next (TX-GAIN) computing system at the Lincoln Laboratory Supercomputing Center (LLSC) is the most powerful AI supercomputer at any U.S. university. With its recent ranking from TOP500, which biannually publishes a list of the top supercomputers in various categories, TX-GAIN joins the ranks of other powerful systems at the LLSC, all supporting research and development at Lincoln Laboratory and across the MIT campus.
"TX-GAIN will enable our researchers to achieve scientific and engineering breakthroughs. The system will play a large role in supporting generative AI, physical simulation, and data analysis across all research areas," says Lincoln Laboratory Fellow Jeremy Kepner, who heads the LLSC.
The LLSC is a key resource for accelerating innovation at Lincoln Laboratory. Thousands of researchers tap into the LLSC to analyze data, train models, and run simulations for federally funded research projects. The supercomputers have been used, for example, to simulate billions of aircraft encounters to develop collision-avoidance systems for the Federal Aviation Administration, and to train models in the complex tasks of autonomous navigation for the Department of Defense. Over the years, LLSC capabilities have been essential to numerous award-winning technologies, including those that have improved airline safety, prevented the spread of new diseases, and aided in hurricane responses.
As its name suggests, TX-GAIN is especially equipped for developing and applying generative AI. Whereas traditional AI focuses on categorization tasks, like identifying whether a photo depicts a dog or cat, generative AI produces entirely new outputs. Kepner describes it as a mathematical combination of interpolation (filling in the gaps between known data points) and extrapolation (extending data beyond known points). Today, generative AI is widely known for its use of large language models to create human-like responses to user prompts.
At Lincoln Laboratory, teams are applying generative AI to various domains beyond large language models. They are using the technology, for instance, to evaluate radar signatures, supplement weather data where coverage is missing, root out anomalies in network traffic, and explore chemical interactions to design new medicines and materials.
To enable such intense computations, TX-GAIN is powered by more than 600 NVIDIA graphics processing unit accelerators specially designed for AI operations, in addition to traditional high-performance computing hardware. With a peak performance of two AI exaflops (two quintillion floating-point operations per second), TX-GAIN is the top AI system at a university, and in the Northeast. Since TX-GAIN came online this summer, researchers have taken notice.
"TX-GAIN is allowing us to model not only significantly more protein interactions than ever before, but also much larger proteins with more atoms. This new computational capability is a game-changer for protein characterization efforts in biological defense," says Rafael Jaimes, a researcher in Lincoln Laboratory's Counter–Weapons of Mass Destruction Systems Group.
The LLSC's focus on interactive supercomputing makes it especially useful to researchers. For years, the LLSC has pioneered software that lets users access its powerful systems without needing to be experts in configuring algorithms for parallel processing.
"The LLSC has always tried to make supercomputing feel like working on your laptop," Kepner says. "The amount of data and the sophistication of analysis methods needed to be competitive today are well beyond what can be done on a laptop. But with our user-friendly approach, people can run their model and get answers quickly from their workspace."
The LLSC systems are housed in an energy-efficient data center and facility in Holyoke, Massachusetts. Research staff in the LLSC are also tackling the immense energy needs of AI and leading research into various power-reduction methods. One software tool they developed can reduce the energy of training an AI model by as much as 80 percent.
"The LLSC provides the capabilities needed to do leading-edge research, while in a cost-effective and energy-efficient manner," Kepner says.
All of the supercomputers at the LLSC use the "TX" nomenclature in homage to Lincoln Laboratory's Transistorized Experimental Computer Zero (TX-0) of 1956. TX-0 was one of the world's first transistor-based machines, and its 1958 successor, TX-2, is storied for its role in pioneering human-computer interaction and AI. With TX-GAIN, the LLSC continues this legacy.
The Princeton University Board of Trustees has approved the appointment of six faculty members, including two full professors, three associate professors and one assistant professor.
Artificial intelligence is everywhere, from the apps people use to the systems that shape hiring decisions and healthcare. But what happens when these tools don’t work equally well for everyone? That question drives the research of Allison Koenecke, a new assistant professor of information science at Cornell Tech.
Nation & World
‘Vibes or hunches’ don’t help win elections
Ryan D. Enos (left) moderates a talk with North Carolina Sen. Thom Tillis. Photos by Jodi Hilton
Christy DeSmith
Harvard Staff Writer
October 2, 2025
4 min read
Political analytics conference convenes experts on voter trends, election forecasting, behavioral research
How should politicians proceed when gut instinct clashes with data analytics?
“You go with the analytics,” said North Carolina Sen. Thom Tillis.
The second-term Republican was on campus last week to help kick off Political Analytics 2025. Organized by the Center for American Political Studies, the one-day conference convened top thinkers on voter trends, election forecasting, and behavioral research. Tillis, a former executive with PricewaterhouseCoopers and IBM, credited an opinion poll with vaulting his legislative career.
He remembered facing a supposedly indomitable incumbent in the 2006 Republican primary for a seat in the North Carolina House of Representatives. “I engaged someone to do a poll to determine whether or not the guy could be beaten — where his vulnerability was,” Tillis said. “We went from going after a guy who was supposedly unbeatable to beating him by a two-to-one margin.”
In the audience were pollsters, consultants, and political scientists with expertise in big data. “Politics deserves every bit of sophistication we can bring to it,” said host Ryan D. Enos, a professor of government and director of the Center for American Political Studies. “We can’t leave it to vibes or hunches any more than we can leave medicine to those things.”
Panelists referred to challenges in today’s rapidly changing political environment, including diminished confidence in institutions and a splintering information ecosystem. “We realize even more the need for an analytical approach to politics,” Enos said. “This is why we revived the Harvard Political Analytics Conference after a seven-year hiatus.”
“We realize even more the need for an analytical approach to politics.”
Ryan D. Enos
In one panel, seasoned strategists shared findings from research and field work. David Shor, a data scientist and consultant supporting candidates on the left, pushed back against those who argue Democrats need to run a white man in the next presidential election, noting that his metrics show women candidates often outperform men. Data and behavioral scientist Matt Oczkowski, formerly of Cambridge Analytica, predicted something of a “civil war” on the right, as establishment Republicans fight to win back the party’s soul.
Enos and his co-organizers also assembled a panel to discuss the country’s youngest voters. Rachel Janfaza ’20, founder of the qualitative research firm The Up and Up, outlined her theory of “two Gen Zs,” separated by the pandemic’s disruption of K-12 education. Also highlighted were concerns about AI’s job market impacts and an emerging gender divide on marriage and children.
Multiple panelists grappled with the state of political polling, with support for President Trump underestimated in three consecutive cycles. They discussed whether artificial intelligence could help pollsters reach more independent and Republican voters or if interviewers should try doubling down on in-person methods.
Graduate student Zachary Donnini (left) and panelists Anthony Salvanto, Steve Kornacki, and Harry Enten.
Polling before last year’s general election was still more accurate than historic averages, noted CNN chief data analyst Harry Enten. “The problem is,” he said, “we have had more elections in a row, at least in the popular vote, that were determined by single digits than at any point since we first started recording the popular vote in 1824.”
Presented as a possible counterweight were prediction markets, or online platforms where users can wager on the outcomes of future events. Jaron Zhou ’22 of the prediction market Kalshi noted traders on his site broke decisively for President Trump last fall. The Q&A session saw conference-goers raising concerns, including the danger of further eroding trust in noncommercial, nonpartisan political polls.
The Hispanic vote proved another lightning rod, with multiple panelists drawing on the 2024 election for evidence of historic realignment. “I wouldn’t be surprised if in 2026, just because of the general nature of midterm elections, you see a lot of the gains Republicans made with Hispanic voters slide back,” said NBC News chief data analyst Steve Kornacki.
But he likened it to the 1986 midterms, when Democrats flipped Senate seats in Alabama, Georgia, and North Carolina. It hardly heralded the party’s resurgence in the South, Kornacki recalled. “It was a blip.”
He suggested factors including race, gender, and age were already driving more lasting change. “Trends that we’ve seen with white voters are starting to take hold among non-white voters,” Kornacki said, “but specifically, and I think most dramatically and most immediately, with Hispanic voters.”
At the heart of all lithium-ion batteries is a simple reaction: Lithium ions dissolved in an electrolyte solution “intercalate” or insert themselves into a solid electrode during battery discharge. When they de-intercalate and return to the electrolyte, the battery charges.
This process happens thousands of times throughout the life of a battery. The amount of power that the battery can generate, and how quickly it can charge, depend on how fast this reaction happens. However, little is known about the exact mechanism of this reaction, or the factors that control its rate.
In a new study, MIT researchers have measured lithium intercalation rates in a variety of different battery materials and used that data to develop a new model of how the reaction is controlled. Their model suggests that lithium intercalation is governed by a process known as coupled ion-electron transfer, in which an electron is transferred to the electrode along with a lithium ion.
Insights gleaned from this model could guide the design of more powerful and faster charging lithium-ion batteries, the researchers say.
“What we hope is enabled by this work is to get the reactions to be faster and more controlled, which can speed up charging and discharging,” says Martin Bazant, the Chevron Professor of Chemical Engineering and a professor of mathematics at MIT.
The new model may also help scientists understand why tweaking electrodes and electrolytes in certain ways leads to increased energy, power, and battery life — a process that has mainly been done by trial and error.
“This is one of these papers where now we began to unify the observations of reaction rates that we see with different materials and interfaces, in one theory of coupled electron and ion transfer for intercalation, building up previous work on reaction rates,” says Yang Shao-Horn, the J.R. East Professor of Engineering at MIT and a professor of mechanical engineering, materials science and engineering, and chemistry.
Shao-Horn and Bazant are the senior authors of the paper, which appears today in Science. The paper’s lead authors are Yirui Zhang PhD ’22, who is now an assistant professor at Rice University; Dimitrios Fraggedakis PhD ’21, who is now an assistant professor at Princeton University; Tao Gao, a former MIT postdoc who is now an assistant professor at the University of Utah; and MIT graduate student Shakul Pathak.
Modeling lithium flow
For many decades, scientists have hypothesized that the rate of lithium intercalation at a lithium-ion battery electrode is determined by how quickly lithium ions can diffuse from the electrolyte into the electrode. This reaction, they believed, was governed by a model known as the Butler-Volmer equation, originally developed almost a century ago to describe the rate of charge transfer during an electrochemical reaction.
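For orientation, the standard textbook form of that equation (not the specific parameterization used in this study) relates the current density at the electrode surface to the applied overpotential:

```latex
% Textbook Butler-Volmer relation (for orientation; not the study's exact form)
% i: current density, i_0: exchange current density, \eta: overpotential,
% \alpha_a, \alpha_c: anodic/cathodic charge-transfer coefficients,
% F: Faraday constant, R: gas constant, T: absolute temperature
i = i_0 \left[ \exp\!\left( \frac{\alpha_a F \eta}{R T} \right)
             - \exp\!\left( -\frac{\alpha_c F \eta}{R T} \right) \right]
```

Here the exchange current density and the transfer coefficients are fitted, material-dependent parameters; the measurements described below deviate from the rates this expression predicts.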
However, when researchers have tried to measure lithium intercalation rates, the measurements they obtained were not always consistent with the rates predicted by the Butler-Volmer equation. Furthermore, obtaining consistent measurements across labs has been difficult, with different research teams reporting measurements for the same reaction that varied by a factor of up to 1 billion.
In the new study, the MIT team measured lithium intercalation rates using an electrochemical technique that involves applying repeated, short bursts of voltage to an electrode. They generated these measurements for more than 50 combinations of electrolytes and electrodes, including lithium nickel manganese cobalt oxide, which is commonly used in electric vehicle batteries, and lithium cobalt oxide, which is found in the batteries that power most cell phones, laptops, and other portable electronics.
For these materials, the measured rates are much lower than has previously been reported, and they do not correspond to what would be predicted by the traditional Butler-Volmer model.
The researchers used the data to come up with an alternative theory of how lithium intercalation occurs at the surface of an electrode. This theory is based on the assumption that in order for a lithium ion to enter an electrode from the electrolyte solution, an electron must be transferred to the electrode at the same time.
“The electrochemical step is not lithium insertion, which you might think is the main thing, but it’s actually electron transfer to reduce the solid material that is hosting the lithium,” Bazant says. “Lithium is intercalated at the same time that the electron is transferred, and they facilitate one another.”
This coupled ion-electron transfer (CIET) lowers the energy barrier that must be overcome for the intercalation reaction to occur, making it more likely to happen. The mathematical framework of CIET allowed the researchers to make reaction rate predictions, which were validated by their experiments and substantially different from those made by the Butler-Volmer model.
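To see qualitatively how a coupled, Marcus-style rate law can differ from Butler-Volmer behavior, here is a small illustrative sketch. The Marcus-like expression is a generic stand-in, not the exact CIET formula from the paper, and all parameter values are assumed purely for illustration.

```python
import numpy as np

# Illustrative comparison of the classical Butler-Volmer rate with a simplified
# Marcus-style rate (a generic stand-in for coupled electron-transfer kinetics).
# Parameter values are assumed for illustration and are NOT from the MIT study.

F = 96485.0      # C/mol, Faraday constant
R = 8.314        # J/(mol K), gas constant
T = 298.0        # K
f = F / (R * T)  # 1/V

def butler_volmer(eta, i0=1.0, alpha=0.5):
    """Classical Butler-Volmer current (arbitrary units)."""
    return i0 * (np.exp(alpha * f * eta) - np.exp(-(1.0 - alpha) * f * eta))

def marcus_like(eta, k0=1.0, lam=0.3):
    """Simplified Marcus-style net rate with reorganization energy lam (eV);
    it levels off at large overpotential instead of growing exponentially."""
    kT = 0.0257  # eV at 298 K
    forward = k0 * np.exp(-((lam - eta) ** 2) / (4.0 * lam * kT))
    backward = k0 * np.exp(-((lam + eta) ** 2) / (4.0 * lam * kT))
    return forward - backward

for eta in (0.05, 0.2, 0.5):
    print(f"eta = {eta:>4} V | Butler-Volmer: {butler_volmer(eta):9.2e} "
          f"| Marcus-like: {marcus_like(eta):9.2e}")
```

The qualitative takeaway is that the Butler-Volmer current keeps growing exponentially with driving force, while a Marcus-style rate saturates, one reason the two pictures can predict very different intercalation rates.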
Faster charging
In this study, the researchers also showed that they could tune intercalation rates by changing the composition of the electrolyte. For example, swapping in different anions can lower the amount of energy needed to transfer the lithium and electron, making the process more efficient.
“Tuning the intercalation kinetics by changing electrolytes offers great opportunities to enhance the reaction rates, alter electrode designs, and therefore enhance the battery power and energy,” Shao-Horn says.
Shao-Horn’s lab and its collaborators have been using automated experiments to make and test thousands of different electrolytes, which are used to develop machine-learning models that predict electrolytes with enhanced functions.
The findings could also help researchers to design batteries that would charge faster, by speeding up the lithium intercalation reaction. Another goal is reducing the side reactions that can cause battery degradation when electrons are picked off the electrode and dissolve into the electrolyte.
“If you want to do that rationally, not just by trial and error, you need some kind of theoretical framework to know what are the important material parameters that you can play with,” Bazant says. “That’s what this paper tries to provide.”
The research was funded by Shell International Exploration and Production and the Toyota Research Institute through the D3BATT Center for Data-Driven Design of Rechargeable Batteries.
Lithium intercalation is the process by which lithium ions insert themselves into the solid electrode of a lithium-ion battery. MIT researchers have shown that as lithium ions (green) move from an electrolyte solution (right) to a cobalt oxide electrode (left), electrons also move into the electrode and reduce the cobalt (gray atoms with gold halo).
Campus & Community
A hopeful dystopia, simple recipe, and ‘circuitous reunion’
October 2, 2025
Professor of Afro-Latin American history recommends sights, tastes, and sounds of Argentina
Part of the Favorite Things series: Recommendations from Harvard faculty
Paulina Alberto is a Professor of African and African American Studies and of History.
A TV show
“El Eternauta”
I’m not usually a fan of sci-fi, but this dystopian show about an alien invasion set in Buenos Aires, Argentina, transcends the genre. To Argentines (like me) who grew up seeing U.S. cities devastated by the magic of Hollywood, it’s a shocking thrill to see our capital city blanketed in the eerie, poisonous snow that announces the invasion. But the show’s message is hopeful: No one gets through this alone. An adaptation of a series of anti-authoritarian graphic novels first published in 1957, today its signature gas masks and motto of solidarity are taken up by Argentine scholars, scientists, and researchers defending their institutions from government attacks.
A recipe
Simple salads
When produce is bountiful (especially after unloading a farm share into our refrigerator), I like to return to the salads I grew up with. Go to a traditional neighborhood restaurant or grill in Buenos Aires and you’ll find a long list of salads with just one vegetable: carrot, tomato, beet, fennel, watercress, turnip, and so on. My father’s favorites were celery or onion (slice either of these paper-thin and soak in wine vinegar before adding plenty of salt and oil). Make at least four or five of these one-veg salads. They inevitably mingle on your plate, but each flavor shines in a way an “everything” salad just can’t replicate.
A song
“Tú ve” by Kevin Johansen and Natalia Lafourcade
This is a twofer — like asking the genie for one more wish: “Tú ve,” by two of my favorite recording artists, Kevin Johansen (Argentina) and Natalia Lafourcade (Mexico). It’s a song (built on a play on words) about separations, missed encounters, and circuitous reunion. Johansen and his band, La Nada, are eclectic and creative, playfully remixing Argentina’s pop, rock, and folklore traditions with genres and collaborators from across the Americas. Lafourcade makes luminous indie music grounded in the traditional styles of Mexico and especially her native Veracruz. You don’t need to understand Spanish to love their music.
Arts & Culture
Live fast, die young, inspire Shakespeare
Max Larkin
Harvard Staff Writer
October 2, 2025
Stephen Greenblatt finds a tragic strain in the life and work of Christopher Marlowe
Many years ago, Stephen Greenblatt tried to convince the writing team behind “Shakespeare in Love” that they were chasing the wrong Renaissance playwright — that the life of Christopher Marlowe, Shakespeare’s contemporary and rival, would make a better movie.
Decades later, Greenblatt has made his case with “Dark Renaissance,” a literary history of the knowns and unknowns of the life of Marlowe, killed at 29.
The book is not just a thrilling read — full of transgression and espionage — but also an argument for Marlowe’s literary significance: as much as anyone the inventor of the Elizabethan theater that Shakespeare would soon perfect, and for the tragic grandeur woven through his work, his own meteoric rise and squalid, murky death.
In an edited interview with the Gazette, Greenblatt, Cogan University Professor of the Humanities, explains what drew him irresistibly to Marlowe, and what Marlowe’s story can tell our time.
It’s important to set the scene. Your book offers a vivid picture of England in the 1560s and 1570s, when Marlowe — and Shakespeare — came of age. It is frankly a frightening place, with religious conflict between Protestants and Catholics, but also Protestants and more extreme Protestants. And it’s marked by violence, betrayal, and paranoia.
Yes. Society had split into warring parties, and the parties hated each other — it wasn’t just that they didn’t agree. People were getting killed. It must have been extremely difficult even to sit down with a big family at Christmas. You’d have to agree not to talk about a whole lot of issues.
And in addition to the internal divisions, which were tremendous, there were foreign armies threatening. The Spanish armada would sail in 1588, and Catholic powers on the continent were recruiting what we might call “terrorists”: training people to kill the queen.
And that atmosphere weighed on the culture. For years the English theater was either extremely crude or sort of arid and moralizing.
Well, yes: You just had all of these regime changes — Henry VIII; his Catholic daughter, Mary; her Protestant sister, Elizabeth — each of which was accompanied by executions. Many people understandably decided to keep their heads down.
You have to think of it as a society rather like contemporary Iran, where there is no shared public space, where — if you are the kind of person inclined to speak out, to say things that the authorities don’t approve of — you can get in tremendous trouble.
Marlowe seems to have been that kind of person. Even in his published work, there are challenges to the divine right of kings, as in “Tamburlaine,” or to God, as in “Faustus,” along with persistent elements of gore, sadism, eroticism.
Yes. We don’t just notice now, in 2025, that these are transgressive plays — he was noticed for it, and attacked for it, immediately in his own time.
And I think Marlowe would be interesting even if he was just a kind of troublemaker, or a wild freethinker in a time when that was dangerous. But what makes his story so compelling is that he was also an incredible genius. He wrote arguably the most beloved love poem of his time — “Come live with me and be my love,” everyone was singing it. Or “Hero and Leander,” a great, sexy poem. And theatrical blank verse: In “Tamburlaine,” Marlowe basically invented this astonishing new medium.
The meter, the form used by Shakespeare — who then eclipses him.
Who eclipses everyone. Shakespeare was the far greater artist, ultimately. People at the time understood that it was horrible to be imitated by Shakespeare: He would watch, absorb, digest, transform, and do what you do even better. Robert Greene, another contemporary writer, said of him, “This is an upstart Crow, beautified with our feathers.” But Shakespeare learned a ton from Marlowe — really a ton.
“People at the time understood that it was horrible to be imitated by Shakespeare: He would watch, absorb, digest, transform, and do what you do even better.”
The book is worth reading if only to see that Shakespeare didn’t spring out of nowhere: how he arose along with Thomas Kyd and Marlowe, in dialogue and competition with them. You say “Tamburlaine,” Marlowe’s first big success, sets the stage. It imagines this conqueror who comes from nothing and rolls over the kings of Asia Minor, a man who is brutal and godless and ambitious. And people loved it — Marlowe had to hurriedly write a second part.
It was just unlike anything that had come before. And that play helped get going, basically, the first mass entertainment industry in the modern world. Not the elegant private theaters that still existed in Italy, like in Vicenza, but this crazy thing — that brought together people who were exquisitely cultivated with pickpockets and whores, all in the same space together. Marlowe was the first person to figure that out.
This book is a natural companion to “Will in the World,” your 2004 narrative of Shakespeare’s creative life. Marlowe comes to seem like his dark twin.
I do see him that way. They were exact contemporaries. They came from similar backgrounds: both provincials, Marlowe the son of a cobbler, Shakespeare the son of a glovemaker. Their paths were different, but they both found their way to London, and to theater, at the same time.
For years, many scholars believed it possible that they never met each other. That always seemed to me wildly unlikely. Today, most scholars believe that in fact the two collaborated on plays, including the three parts of “Henry VI” — that they were in a writers’ room together.
You have a scene imagining what it was like in that room, and it’s sort of funny. Shakespeare ends up seeming careerist, cagey, sort of “square” next to Marlowe, who was a Cambridge graduate, a reputed atheist, an acquaintance to the biggest names in his world, and who died violently at just 29. He was also, notoriously, some kind of a spy for the Elizabethan regime — the source of much speculation.
Yes. We know that the Privy Council — really the most important people in the country — intervened to get Marlowe his master’s degree from Cambridge when it was being withheld. And what they said was that he “had done her Majesty good service and deserved to be rewarded.”
This is a world in which people are listening to each other, drawing out people’s secrets and reporting them to the authorities. I’m careful not to say what exactly Marlowe did, because I don’t know exactly what he did. We can only speculate, from his being a simple courier to something more sinister. But that “good service” letter — it’s not as if there are 500 more such letters around. I can’t think of another one. So I tend to think he wasn’t just a courier.
A 1585 portrait thought to be of Christopher Marlowe.
Wikimedia Commons
Let’s talk about his mysterious death. After a long day with some very unseemly associates, Marlowe was fatally stabbed in the eye, age 29, in 1593. His tablemates tell the authorities that it was a fight about the “recknynge” — the bill. You’re not convinced.
It’s just strange. Among the three people he was with was Robert Poley, a kind of career spy and one of the scariest people you could ever bump up against in the late 16th century. He got his hands in very bloody and sinister things, including the entrapment of Mary, Queen of Scots — though not just that.
And then we learn that a full account of the terrible, heretical things Marlowe was supposedly saying was copied and given to the queen. That’s unusual: She didn’t look at every accusation. And then it’s reported that the queen’s response was to “prosecute it to the full.” As usual, it’s murky — “prosecute it to the full” doesn’t mean “stab Marlowe to death in an inn.” But someone could easily have interpreted it as that kind of official encouragement.
Let’s close on that point. Christopher Marlowe, it seems to me, anticipates a familiar artistic type: the live-fast-die-young sort. If he had a kind of death wish, you could argue he almost wrote it into his greatest play, “Faustus,” about the doctor’s famous deal with the devil. In this version, Faustus gives up his soul for 24 years of boundless power.
Yes. There’s something intensely personal about Marlowe’s representation of Dr. Faustus. And it’s odd: In the deal, it isn’t the devil who proposes the 24 years. Faustus comes up with that. It’s as if some part of Marlowe was interested in the idea of knowing your end was not so far off.
So maybe Marlowe knew he couldn’t last forever. He ends up being a very moving character, given the little we know about him — and acknowledging that Shakespeare sort of left him in his dust.
Yes. And we don’t know what Marlowe might have become. If Shakespeare had died at 29, we’d have, what? “Two Gentlemen of Verona,” the Henry VI plays, and “Titus Andronicus”? It’s just not that interesting a career. Nearly all of his great work lay ahead of him, at that age.
And Shakespeare appeared to be as conscious of Marlowe as any contemporary writer. Marlowe was the only one he quoted directly in one of his plays. He watched Marlowe write “Edward II”; he wrote “Richard II.” Marlowe wrote “The Jew of Malta”; Shakespeare wrote “The Merchant of Venice.” Marlowe wrote “Tamburlaine” and he wrote “Titus Andronicus” — as if to say, “You want blood, I’ll give you blood!”
And then you’re clear about the trail Marlowe helped blaze: how to popularize the theater, blend high and low sensibilities, write poetry for the stage, and maybe smuggle cultural critique into popular entertainment.
Yeah. Marlowe seemed to see a kind of wall built around him, in his time. He wanted to get out but there was no door, so he just took this hammer and smashed a hole in the wall — I’m putting it crudely. And then Shakespeare basically walked over his dead body, through the hole that he had made.
Amitabh Chandra. Veasey Conway/Harvard Staff Photographer
Health
Corporatization of healthcare gets too much of a bad rap, analyst says
For-profits, private equity can boost innovation, growth, care, according to co-author of new paper. But gains need to be aligned with patient outcomes.
Christina Pazzanese
Gazette Staff Writer
October 2, 2025
Private investors are playing an ever-larger role in U.S. healthcare, a growing trend known as corporatization. Investors supply much-needed funding to pharmaceutical and biomedical companies, healthcare institutions, and physicians to help pay for drug development, meet escalating expenses, and increase efficiency and scale. But too often, critics say, the push for profit ends up leaving patients with reduced quality and choice, and ever-surging costs.
In a new paper in the New England Journal of Medicine, co-author Amitabh Chandra argues private investment in the healthcare system fills a critical need that others, like the federal government and nonprofits, simply cannot.
In this edited conversation, Chandra, who is director of the Malcolm Wiener Center for Public Policy at Harvard Kennedy School and the Henry and Allison McCance Family Professor of Business Administration at Harvard Business School, said contrary to popular opinion, profit-seeking in healthcare doesn’t necessarily mean that patients will be worse off.
What is corporatization?
Corporatization is essentially a deal between a medical organization and investors. The organization receives capital that can be used for new technologies, upgraded facilities, research, or competitive salaries.
In return, investors expect a share of the profits. The share might be small or large — 1 percent, 10 percent, or even 50 percent — depending on the terms of the agreement. At its core, corporatization “unlocks” money for growth, but it does so in a way that prioritizes profits, since investors can always move their funds elsewhere if returns are lacking.
You say private investment in healthcare isn’t necessarily bad. What are some of the benefits besides an infusion of cash?
The deal between an investor and a medical organization is voluntary, so it clearly benefits those two parties. But the real question is whether it benefits society — and that’s not always obvious.
The key measure is what happens to patient outcomes when corporatization occurs. In some areas, such as nursing homes, the record is poor. Here, some private equity firm owners may cut staffing and reduce quality to boost profits, which has been linked to higher patient mortality.
“The key measure is what happens to patient outcomes when corporatization occurs.”
But in other areas, corporatization has delivered real benefits. In vitro fertilization (IVF) is one example. Because IVF is capital-intensive, larger corporate networks can use scale, data, and investment in technology to improve success rates. Patients benefit because quality is measurable (pregnancy rates), and clinics compete directly on outcomes and price.
Similarly, in the biopharmaceutical industry, private investment has been indispensable for funding the huge costs of drug development, enabling the creation of treatments that otherwise wouldn’t exist.
So, the benefits of corporatization beyond just “more money” depend on whether the investment is used to expand scale, improve processes, or foster innovation in ways that actually improve patient outcomes.
Because healthcare and scientific R&D are so expensive, isn’t tension between market incentives and health outcomes, between patients and profits, inevitable?
I don’t think so. The “inevitable tension” view assumes that anytime profits are involved, patients’ well-being is compromised. But imagine a world without profits — would patients automatically be better off? The answer is no.
Much of healthcare depends on improving quality and driving innovation: treating a heart attack patient better this year than last, developing new medicines, or adopting better technologies. All of that requires capital, and profits are what attract that capital.
It’s true that people worry — often rightly — about the excesses of for-profit entities. But from that, some conclude that the very presence of profit must harm patients. That’s a mistake.
Even the solo physician in private practice is seeking profit, and without profit the practice will shutter. So it’s naïve to say that corporations making profits are bad, but individuals making profits is fine.
The real challenge isn’t profit itself, but how well we align profits with value for patients. In sectors like IVF or biopharmaceuticals, profits and patient outcomes can reinforce each other. In others, like nursing homes, misaligned incentives can lead to harm.
In your view, why has corporatization been helpful in some sectors but not in others, like nursing homes?
A big part of the answer is how observable quality is. Take IVF clinics: Their promise is straightforward — fertility. Patients can easily see whether treatment leads to pregnancy, and clinics compete directly on success rates and cost.
Pharmaceuticals are more complex, but here quality is backed up by regulation. Patients may not be able to evaluate a drug on their own, but FDA approval signals that it is safe and effective, and physicians act as trusted intermediaries.
That combination of regulation and professional oversight helps align profits with patient outcomes, which is why corporatization has supported innovation in pharma.
“That combination of regulation and professional oversight helps align profits with patient outcomes, which is why corporatization has supported innovation in pharma.”
By contrast, nursing homes lack clear, trusted quality measures. Families struggle to assess the quality of day-to-day care, and regulators are relatively weak. This creates space for profit-driven owners — especially private equity firms — to cut staffing and reduce quality, even in ways that increase patient mortality.
Without reliable measures or enforcement, corporatization in this sector tends to harm rather than help patients.
Isn’t the federal government better-suited than private equity to fund this kind of work, especially research and development, which often requires a huge investment over a long period of time with no guarantee that a product is going to work, get approved by regulators, or be successful in a crowded marketplace?
No. Governments, including the U.S., have shown themselves to be poor at sustaining long-term investments.
The NIH budget is about $35 billion, which is crucial for supporting basic science, but it’s small compared to what’s needed. By contrast, the pharmaceutical industry invests around $275 billion globally each year in R&D. That scale of spending is far beyond what governments, nonprofits and foundations are willing or able to sustain.
Without private capital, the massive, high-risk clinical trials and product development that bring new treatments to patients simply wouldn’t happen.
Government funding also comes with bureaucratic hurdles, shifting priorities, and budgetary uncertainty — not a good recipe for the steady, long-term investment required for drug development. Where government plays an essential role is in early-stage research, creating the scientific foundation.
Here too, as the current stoppage of NIH grants illustrates, it struggles to provide smooth funding, which is a prerequisite for producing great science. To be clear, it’s not just the U.S. government that is bad at long-term spending on science. The governments of many rich countries have a substantially worse record.
Why is that happening?
It’s ultimately a choice. Ideally, government should spend more because the benefits of basic science research accrue to society as a whole. Take Alzheimer’s disease: Developing a truly transformational treatment may take 30, 40, even 50 years. That kind of research horizon isn’t attractive to private investors, so government has to play a larger role in financing the early-stage science.
And in fact, despite its shortcomings, the U.S. government is the world’s largest funder of basic biomedical research. The problem is that governments everywhere face structural limits: They are not well-suited for long-term commitments that don’t yield visible benefits to voters in the short run. Other countries often free-ride on U.S. investments, making the challenge even greater.
What steps can be taken to reap the benefits of for-profit investment while minimizing some of the negative outcomes that can arise from a desire to find profit?
Profit-seeking is not the problem itself. Even nonprofits generate profits; they just don’t pay taxes on them. A system without profits wouldn’t automatically make patients better off; in fact, it would shrink the scale of care and stifle innovation. The real challenge is ensuring that profits are aligned with value for patients.
“The real challenge is ensuring that profits are aligned with value for patients.”
The most important step is to strengthen regulation. Right now, regulators in healthcare are under-resourced and often unable to oversee complex deals or prevent abuses. A well-resourced and independent regulator is critical to making corporatization work well. The FDA is a good example: It provides trusted, science-based approval of drugs, which helps align corporate incentives with patient outcomes.
We need similar capacity elsewhere in healthcare — regulators at the Centers for Medicare and Medicaid Services, the Federal Trade Commission, and the Department of Justice that are insulated from political interference and independent of the industries they oversee.
With better-quality measurement, enforcement of antitrust rules, and the authority to stop or unwind deals that don’t generate value for society, regulation can help ensure that corporate investment expands access, improves quality, and drives innovation without sacrificing patient well-being.
Investigators at Weill Cornell Medicine and Cornell’s Ithaca campus will use a $5.1 million grant from the NIH to launch the Autism Replication, Validation, and Reproducibility Center, which aims to improve the reliability of autism research.
If you have ever cooked on a gas stove or seen a flame flicker to life with the turn of a knob, you have seen natural gas in action. Supplying that energy at scale, however, is far more complicated. Today, natural gas is mostly stored under high pressure or cooled into liquid at -162 °C — both methods that are energy-intensive and costly. An alternative approach, called solidified natural gas, locks methane inside an ice-like cage known as a hydrate. But in practice, these hydrates usually form far too slowly to be practical on a larger scale.
Researchers led by Professor Praveen Linga from the Department of Chemical and Biomolecular Engineering at the College of Design and Engineering, National University of Singapore, have found a simple workaround by adding amino acids — the building blocks of proteins. In a new study ‘Rapid conversion of amino acid modified-ice to methane hydrate for sustainable energy storage’ published in Nature Communications, the researchers showed that freezing water with a small amount of these naturally occurring compounds produces an “amino-acid-modified ice” that locks in methane gas in minutes. In tests, the material reached 90 per cent of its storage capacity in just over two minutes, compared with hours for conventional systems.
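For a rough sense of what “full” methane storage in a hydrate means, the sketch below gives a back-of-the-envelope estimate using generic textbook values (the ideal structure I hydrate composition CH4·5.75H2O and a hydrate density of about 0.91 g/cm³); these reference numbers are not taken from the NUS study.

```python
# Back-of-the-envelope estimate of ideal methane hydrate storage capacity,
# using generic textbook values, NOT numbers from the NUS study.

M_CH4 = 16.04             # g/mol, methane
M_H2O = 18.02             # g/mol, water
HYDRATION_NUMBER = 5.75   # water molecules per methane in ideal sI hydrate
HYDRATE_DENSITY = 0.91e6  # g per cubic meter (about 0.91 g/cm3)
MOLAR_VOLUME_STP = 22.4   # liters of gas per mole at standard conditions

molar_mass_hydrate = M_CH4 + HYDRATION_NUMBER * M_H2O        # g/mol
moles_ch4_per_m3 = HYDRATE_DENSITY / molar_mass_hydrate      # mol CH4 per m3 of solid
gas_volume_m3 = moles_ch4_per_m3 * MOLAR_VOLUME_STP / 1000.0 # m3 of gas at STP

print(f"Ideal hydrate holds roughly {gas_volume_m3:.0f} volumes of methane "
      f"per volume of solid (commonly quoted as about 160-180 v/v).")
```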
The method also brings environmental benefits. Because amino acids are biodegradable, the method averts the environmental risks posed by surfactants often used to speed up hydrate formation. It also allows methane to be released on demand with gentle heating, after which the ice can be refrozen and reused, creating a closed-loop storage cycle. This combination of performance and sustainability makes the approach attractive for large-scale natural gas storage as well as for smaller, renewable sources of biomethane. The team also sees potential for adapting the technique to store other gases, including carbon dioxide and hydrogen.
Faster hydrates with a biological twist
The concept behind the new material is highly effective yet elegantly simple: mix water with amino acids, freeze it and then expose the ice to methane gas. In the lab, this amino-acid-modified ice quickly transformed into a white, expanded solid — evidence that methane had been locked inside as hydrate. Within just over two minutes, the material stored 30 times more methane than plain ice could hold.
This is possible because amino acids change the surface properties of the ice. Hydrophobic amino acids such as tryptophan encourage the formation of tiny liquid layers on the ice surface as methane is injected. These layers act as fertile ground for hydrate crystals to grow, producing a porous, sponge-like structure that speeds up gas capture. By contrast, plain ice tends to form a dense outer film that blocks further methane from diffusing inward, slowing the process dramatically.
To probe what was happening at the molecular level, the team turned to Raman spectroscopy, a technique that tracks how light scatters from vibrating molecules. These experiments showed methane rapidly filling two types of microscopic cages inside the hydrate structure, with occupancies above 90 per cent. “This gives us direct evidence that the amino acids are not only speeding up the process but also allowing methane to pack efficiently into the hydrate cages,” said Dr Ye Zhang, the lead author of the paper, a Research Fellow from the Department of Chemical and Biomolecular Engineering.
The team also tested different amino acids and found a clear pattern. Notably, hydrophobic ones like methionine and leucine worked well, while hydrophilic ones such as histidine and arginine did not. This “design rule,” Prof Linga said, could guide future efforts to tailor ice surfaces for gas storage.
From lab results to energy storage cycles
The researchers’ work is still at the proof-of-concept stage, but the performance of the modified ice is very promising. At near-freezing temperatures and moderate pressures, the amino acid ice outperformed some of the most advanced porous materials, including metal-organic frameworks and zeolites, used for storing natural gas — not only in how much methane it could hold, but also in how quickly it filled. And unlike surfactant-based systems, it did not produce foaming during gas release, which is a major hurdle for large-scale operation.
Equally important is the ability to empty and reuse the system. By gently warming the hydrate, the team could recover all the stored methane. The leftover solution could then be frozen again to form fresh amino acid modified ice, setting up a repeatable ‘charge–discharge’ cycle reminiscent of how batteries store and release energy.
Reusability and sustainability make the method appealing for handling smaller, distributed supplies of renewable biomethane, which are often too modest in scale to justify expensive liquefaction or high-pressure storage facilities. The team is also exploring how to scale up the process for larger systems, including reactor designs that maintain efficient gas–liquid–solid contact, as well as tests with natural gas mixtures containing methane, ethane and propane. Other directions include improving hydrate stability through amino acid-engineered composite systems, and eventually adapting the method for gases such as carbon dioxide and hydrogen.
“Natural gas and biomethane are important components in the energy mix today, but their storage and transport have long relied on methods that are either costly or carbon-intensive,” added Prof Linga. “What we are showing is a simple, biodegradable pathway that can both work quickly and be reused. It makes gas storage safer, greener and more adaptable.”
Crocodile Foundation Ltd (CFL) has made a gift of S$1 million to NUS, in support of its Communities and Engagement (C&E) programme, to advance service-learning initiatives that benefit seniors and vulnerable families. This was announced during the Foundation’s 10th anniversary celebrations which was held on 1 September 2025.
Professor Peter Ho, NUS Vice Provost (Undergraduate Studies & Technology-Enhanced Learning), and other CFL beneficiaries attended the 10th anniversary celebrations.
C&E courses—which are offered as part of the University’s General Education curriculum for undergraduates—enable students to reflect deeply and take constructive action to address societal needs and tackle real-world challenges such as inequality and poverty.
Supporting change, serving communities
CFL’s gift will support the NUS C&E programme, which includes service-learning courses such as GEN2060 Reconnect SeniorsSG; GEN2061 Support Healthy AgeingSG; GEN2062 Community Activities for Seniors with SG Cares; and GEN2070 Community Link (ComLink) Befrienders. Since the inception of the C&E Pillar in 2021, students have collectively contributed over 200,000 service-learning hours, demonstrating the Pillar’s impact in promoting student volunteering efforts in Singapore. Through these service-learning courses, NUS students engage with societal issues and apply what they learnt from the classroom in meaningful volunteer service.
Forging a powerful partnership
This inaugural gift from CFL marks the start of a philanthropic partnership with NUS and underscores the Foundation’s commitment to lasting societal impact. Established by the late Dato’ Dr Tan Hian Tsin and To’ Puan Dr Tsao Sui Lan on 15 September 2015, CFL has generously supported several charitable organisations in Singapore and around the world. The Foundation’s philanthropic contributions are guided by their dedication to education, healthcare and the well-being of society.
“The objectives of the C&E pillar align very closely with the late Dato’ Dr Tan and To’ Puan Dr Tsao’s vision of helping the marginalised in our society,” explained Ms Pearlyn Ng, Chairperson of Crocodile Foundation. “We felt it was a perfect fit for our Foundation, and a meaningful way to commemorate 10 years of making a difference. We hope that students taking these service-learning courses will be able to bring meaningful impact to seniors and vulnerable families while also learning valuable lessons about compassion and empathy.”
CFL’s gift will enable scalable, sustainable, and continuous support for socially vulnerable seniors in Singapore to age with dignity and good health, while also enhancing the social mobility of disadvantaged families. The gift will fund student volunteer work, workshops and activities under the C&E Pillar, encouraging greater participation among students enrolled in the four C&E courses supported by the gift.
Prof Ho shared that the C&E Pillar has been making a tremendous impact. “Our C&E service-learning programmes are designed to foster deep connections between our students and the community,” said Prof Ho. “Through meaningful befriending and innovative community projects, our students don’t just learn about societal issues—they contribute to sustainable transformation of the volunteer landscape. We are deeply grateful for the far-sighted support of the Crocodile Foundation, which will enable our students to become compassionate agents of change.”
NUS President Professor Tan Eng Chye said, “We would like to extend our sincere appreciation to Crocodile Foundation for its generous gift, which will enable seniors and vulnerable families to have greater access to the care and support they need. At the same time, our students will acquire valuable experiences that empower them to become compassionate and socially responsible contributors to society. This virtuous cycle will, in turn, create enduring positive change towards a more caring society.”
NUS’ 120th anniversary coincides with Crocodile Foundation’s 10th anniversary this year, a testament to our shared vision and commitment to positively impact the local community through service.
AI chatbots are taking over writing for students and researchers. Linguist Giorgio Iemmolo explains what we are jeopardising in the interest of efficiency gains.
Designing a complex electronic device like a delivery drone involves juggling many choices, such as selecting motors and batteries that minimize cost while maximizing the payload the drone can carry or the distance it can travel.
Unraveling that conundrum is no easy task, but what happens if the designers don’t know the exact specifications of each battery and motor? On top of that, the real-world performance of these components will likely be affected by unpredictable factors, like changing weather along the drone’s route.
MIT researchers developed a new framework that helps engineers design complex systems in a way that explicitly accounts for such uncertainty. The framework allows them to model the performance tradeoffs of a device with many interconnected parts, each of which could behave in unpredictable ways.
Their technique captures the likelihood of many outcomes and tradeoffs, giving designers more information than many existing approaches, which at most can usually model only best-case and worst-case scenarios.
Ultimately, this framework could help engineers develop complex systems like autonomous vehicles, commercial aircraft, or even regional transportation networks that are more robust and reliable in the face of real-world unpredictability.
“In practice, the components in a device never behave exactly like you think they will. If someone has a sensor whose performance is uncertain, and an algorithm that is uncertain, and the design of a robot that is also uncertain, now they have a way to mix all these uncertainties together so they can come up with a better design,” says Gioele Zardini, the Rudge and Nancy Allen Assistant Professor of Civil and Environmental Engineering at MIT, a principal investigator in the Laboratory for Information and Decision Systems (LIDS), an affiliate faculty with the Institute for Data, Systems, and Society (IDSS), and senior author of a paper on this framework.
Zardini is joined on the paper by lead author Yujun Huang, an MIT graduate student; and Marius Furter, a graduate student at the University of Zurich. The research will be presented at the IEEE Conference on Decision and Control.
Considering uncertainty
The Zardini Group studies co-design, a method for designing systems made of many interconnected components, from robots to regional transportation networks.
The co-design language breaks a complex problem into a series of boxes, each representing one component, that can be combined in different ways to maximize outcomes or minimize costs. This allows engineers to solve complex problems in a feasible amount of time.
In prior work, the researchers modeled each co-design component without considering uncertainty. For instance, the performance of each sensor the designers could choose for a drone was fixed.
But engineers often don’t know the exact performance specifications of each sensor, and even if they do, it is unlikely the sensor will perfectly follow its spec sheet. At the same time, they don’t know how each sensor will behave once integrated into a complex device, or how performance will be affected by unpredictable factors like weather.
“With our method, even if you are unsure what the specifications of your sensor will be, you can still design the robot to maximize the outcome you care about,” says Furter.
To accomplish this, the researchers incorporated this notion of uncertainty into an existing framework based on category theory.
Using some mathematical tricks, they simplified the problem into a more general structure. This allows them to use the tools of category theory to solve co-design problems in a way that considers a range of uncertain outcomes.
By reformulating the problem, the researchers can capture how multiple design choices affect one another even when their individual performance is uncertain.
This approach is also simpler than many existing tools that typically require extensive domain expertise. With their plug-and-play system, one can rearrange the components in the system without violating any mathematical constraints.
And because no specific domain expertise is required, the framework could be used by a multidisciplinary team where each member designs one component of a larger system.
“Designing an entire UAV isn’t feasible for just one person, but designing a component of a UAV is. By providing the framework for how these components work together in a way that considers uncertainty, we’ve made it easier for people to evaluate the performance of the entire UAV system,” Huang says.
More detailed information
The researchers used this new approach to choose perception systems and batteries for a drone that would maximize its payload while minimizing its lifetime cost and weight.
While each perception system may offer a different detection accuracy under varying weather conditions, the designer doesn’t know exactly how its performance will fluctuate. This new system allows the designer to take these uncertainties into consideration when thinking about the drone’s overall performance.
And unlike other approaches, their framework reveals distinct advantages of each battery technology.
For instance, their results show that at lower payloads, nickel-metal hydride batteries provide the lowest expected lifetime cost. This insight would be impossible to fully capture without accounting for uncertainty, Zardini says.
While another method might only be able to show the best-case and worst-case performance scenarios of lithium polymer batteries, their framework gives the user more detailed information.
For example, it shows that if the drone’s payload is 1,750 grams, there is a 12.8 percent chance the battery design would be infeasible.
“Our system provides the tradeoffs, and then the user can reason about the design,” he adds.
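To make a figure like that 12.8 percent concrete, here is a toy Monte Carlo sketch of how a probability of infeasibility can be estimated when component performance is uncertain. It is not the researchers’ category-theory-based co-design framework, and every distribution, parameter, and threshold below is invented purely for illustration.

```python
import random

# Toy Monte Carlo sketch: estimate the probability that a battery choice is
# infeasible for a given payload when component performance is uncertain.
# All numbers (capacity and power distributions, mission threshold) are
# invented for illustration and do not come from the MIT paper.

random.seed(0)

PAYLOAD_G = 1750            # payload mass in grams (the figure discussed above)
N_TRIALS = 100_000
REQUIRED_FLIGHT_MIN = 20.0  # hypothetical mission requirement, in minutes

def simulate_flight_minutes(payload_g: float) -> float:
    """One random draw of flight time given uncertain capacity and power draw."""
    capacity_wh = random.gauss(100.0, 10.0)                 # uncertain battery capacity
    power_w = random.gauss(120.0 + 0.08 * payload_g, 15.0)  # draw grows with payload
    return 60.0 * capacity_wh / max(power_w, 1e-6)

infeasible = sum(
    simulate_flight_minutes(PAYLOAD_G) < REQUIRED_FLIGHT_MIN
    for _ in range(N_TRIALS)
)
print(f"Estimated probability the design is infeasible: {infeasible / N_TRIALS:.1%}")
```

A probability like this summarizes the whole distribution of outcomes rather than just the best and worst cases, which is the extra information the framework is meant to surface.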
In the future, the researchers want to improve the computational efficiency of their problem-solving algorithms. They also want to extend this approach to situations where a system is designed by multiple parties that are collaborative and competitive, like a transportation network in which rail companies operate using the same infrastructure.
“As the complexity of systems grow, and involves more disparate components, we need a formal framework in which to design these systems. This paper presents a way to compose large systems from modular components, understand design trade-offs, and importantly do so with a notion of uncertainty. This creates an opportunity to formalize the design of large-scale systems with learning-enabled components,” says Aaron Ames, the Bren Professor of Mechanical and Civil Engineering, Control and Dynamical Systems, and Aerospace at Caltech, who was not involved with this research.
MIT researchers developed a framework that can help engineers design systems that involve many interconnected parts in a way that explicitly accounts for the uncertainty in each component’s performance.
Scholars have long been fascinated by the Chinese diaspora and the richness of Chinese civilisation – exploring how migration shapes identity and belonging, and how Chinese philosophy has shaped China’s society and cultural life. Few intellectuals, however, have illuminated these complexities with greater depth and clarity than NUS University Professor and eminent historian Professor Wang Gungwu.
In recognition of his groundbreaking work on Chinese history, particularly focusing on imperial China, the relationship between China and Southeast Asia, and the evolving identities of Chinese communities in Southeast Asia, Prof Wang was awarded the Tang Prize in Sinology in 2020.
To showcase the achievements of its laureates to a wider population, the Tang Prize Foundation produces documentary films that explore the life, work and impact of each prize winner. In the South, Thinking China: From Chinese History to Nanyang Identity is a new documentary that delves into Prof Wang’s remarkable life and academic journey. The film was screened in September at the University of Malaya and NUS.
Reflections on family, displacement and history
Filmed across Singapore and Malaysia, the 47-minute documentary traces Prof Wang's life and exploration of identity among diasporic Chinese. It presents his analysis of China’s internal historical dynamics and its interactions with its southern neighbours, offering a comprehensive understanding of the Chinese people's position in the international community.
To inspire the public with his remarkable achievements, the film sought to portray Prof Wang’s life and trailblazing scholarship in an accessible way, said Dr Chern Jenn-Chuan, Chief Executive Officer of the Tang Prize Foundation, who attended the documentary’s premiere at NUS. The event was hosted by the Faculty of Arts and Social Sciences on 23 September 2025.
Thanking the Foundation for its moving portrayal of his life’s work, Prof Wang said: “The film is not a story about scholarship or history. It’s really a story of people who, wherever they are, have questions about where home is.”
This universal question has, in fact, shaped much of his scholarship and perspective on his personal experiences. The idea of “home”, he observed, is closely linked to the concept of “family”, which has long been placed at the heart of the Chinese civilisation’s moral and social order. Unlike a nation or an empire, a family is rooted in kinship. Over time, however, the family expanded into moral, social and political hierarchies. What began as a natural bond of belonging grew into complex structures of duty, power and control.
Prof Wang’s reflections on home and family were also shaped by his own experiences of displacement. Born outside China, he and his late wife, Mrs Margaret Wang, spent much of their lives abroad in countries like Malaysia, Australia and the UK. Paradoxically, this distance from their ancestral homeland allowed them to redefine “home” on their own terms. These reflections were also captured in memoirs titled Home Is Not Here and Home Is Where We Are, published by NUS Press in 2018 and 2020.
In response to a question from the audience, Prof Wang stressed that history should not be viewed purely as an academic discipline but as a lens to the past and its relevance to the present. The Chinese, he added, are not simply interested in the past for its own sake but seek to document the past to guide future generations on morality, governance and society.
He believes that belonging and identity are ultimately found in the stories of families, migrants and civilisations. “Everybody who left home anytime, anywhere, has a story to tell, and I dearly wish to hear more of them”, he said, observing that in sharing such stories, we rediscover what home truly means.
After receiving the Tang Prize in 2020, Prof Wang generously donated his prize money (a NT$10 million research grant) to establish the Margaret Wang Memorial Master’s Scholarship, in memory of his late wife. Through this scholarship, Prof Wang hopes to encourage research on literature – specifically the works of Southeast Asian writers who write in Chinese or English – as well as scholarship in Sinology. He has also established the separate Margaret Wang Master’s Scholarship in Literature with his personal funds.
The documentary, originally produced in Chinese, is available to the public on YouTube. The Foundation is working on releasing an English version soon.
Mostafa Fawzy became interested in physics in high school. It was the “elegance and paradox” of quantum theory that got his attention and led to his studies at the undergraduate and graduate level. But even with a solid foundation of coursework and supportive mentors, Fawzy wanted more. MIT Open Learning’s OpenCourseWare was just the thing he was looking for.
Now a doctoral candidate in atomic physics at Alexandria University and an assistant lecturer of physics at Alamein International University in Egypt, Fawzy reflects on how MIT OpenCourseWare bolstered his learning early in his graduate studies in 2019.
Part of MIT Open Learning, OpenCourseWare offers free, online, open educational resources from more than 2,500 courses that span the MIT undergraduate and graduate curriculum. Fawzy was looking for advanced resources to supplement his research in quantum mechanics and theoretical physics, and he was immediately struck by the quality, accessibility, and breadth of MIT’s resources.
“OpenCourseWare was transformative in deepening my understanding of advanced physics,” Fawzy says. “I found the structured lectures and assignments in quantum physics particularly valuable. They enhanced both my theoretical insight and practical problem-solving skills — skills I later applied in research on atomic systems influenced by magnetic fields and plasma environments.”
He completed educational resources including Quantum Physics I and Quantum Physics II, calling them “dense and mathematically sophisticated.” He met the challenge by engaging with the content in different ways: first, by simply listening to lectures, then by taking detailed notes, and finally by working through problem sets. Although initially he struggled to keep up, this methodical approach paid off, he says.
Fawzy is now in the final stages of his doctoral research on high-precision atomic calculations under extreme conditions. While in graduate school, he has published eight peer-reviewed international research papers, making him one of the most prolific doctoral researchers in physics working in Egypt currently. He served as an ambassador for the United Nations International Youth Conference (IYC), and he was nominated for both the African Presidential Leadership Program and the Davisson–Germer Prize in Atomic or Surface Physics, a prestigious annual prize offered by the American Physical Society.
He is grateful to his undergraduate mentors, professors M. Sakr and T. Bahy of Alexandria University, as well as to MIT OpenCourseWare, calling it a “steadfast companion through countless solitary nights of study, a beacon in times when formal resources were scarce, and a living testament to the nobility of open, unbounded learning.”
Recognizing the power of mentorship and teaching, Fawzy serves as an academic mentor with the African Academy of Sciences, supporting early-career researchers across the continent in theoretical and atomic physics.
“Many of these mentees lack access to advanced academic resources,” he explains. “I regularly incorporate OpenCourseWare into our mentorship sessions, using it as a foundational teaching and reference tool. It’s an equalizer, providing the same high-caliber content to students regardless of geographical or institutional limitations.”
As he looks toward the future, Fawzy has big plans, influenced by MIT.
“I aspire to establish a regional center for excellence in atomic and plasma physics, blending cutting-edge research with open-access education in the Global South,” he says.
As he continues his research and teaching, he also hopes to influence science policy and contribute to international partnerships that shine the spotlight on research and science in emerging nations.
Along the way, he says, “OpenCourseWare remains a cornerstone resource that I will return to again and again.”
Fawzy says he’s also interested in MIT Open Learning resources in computational physics and energy and sustainability. He’s following MIT’s Energy Initiative, calling it increasingly relevant to his current work and future plans.
Fawzy is a proponent of open learning and a testament to its power.
“The intellectual seeds sown by Open Learning resources such as MIT OpenCourseWare have flourished within me, shaping my identity as a physicist and affirming my deep belief in the transformative power of knowledge shared freely, without barriers,” he says.
“The intellectual seeds sown by Open Learning resources such as MIT OpenCourseWare have flourished within me, shaping my identity as a physicist and affirming my deep belief in the transformative power of knowledge shared freely, without barriers,” says Mostafa Fawzy, who has been using MIT's open educational resources since he was in high school.
Concrete already builds our world, and now it’s one step closer to powering it, too. Made by combining cement, water, ultra-fine carbon black (with nanoscale particles), and electrolytes, electron-conducting carbon concrete (ec3, pronounced “e-c-cubed”) creates a conductive “nanonetwork” inside concrete that could enable everyday structures like walls, sidewalks, and bridges to store and release electrical energy. In other words, the concrete around us could one day double as giant “batteries.”
As MIT researchers report in a new PNAS paper, optimized electrolytes and manufacturing processes have increased the energy storage capacity of the latest ec3 supercapacitors by an order of magnitude. In 2023, storing enough energy to meet the daily needs of the average home would have required about 45 cubic meters of ec3, roughly the amount of concrete used in a typical basement. Now, with the improved electrolyte, that same task can be achieved with about 5 cubic meters, the volume of a typical basement wall.
“A key to the sustainability of concrete is the development of ‘multifunctional concrete,’ which integrates functionalities like this energy storage, self-healing, and carbon sequestration. Concrete is already the world’s most-used construction material, so why not take advantage of that scale to create other benefits?” asks Admir Masic, lead author of the new study, MIT Electron-Conducting Carbon-Cement-Based Materials Hub (EC³ Hub) co-director, and associate professor of civil and environmental engineering (CEE) at MIT.
The improved energy density was made possible by a deeper understanding of how the nanocarbon black network inside ec3 functions and interacts with electrolytes. Using focused ion beams for the sequential removal of thin layers of the ec3 material, followed by high-resolution imaging of each slice with a scanning electron microscope (a technique called FIB-SEM tomography), the team across the EC³ Hub and MIT Concrete Sustainability Hub was able to reconstruct the conductive nanonetwork at the highest resolution yet. This approach allowed the team to discover that the network is essentially a fractal-like “web” that surrounds ec3 pores, which is what allows the electrolyte to infiltrate and for current to flow through the system.
“Understanding how these materials ‘assemble’ themselves at the nanoscale is key to achieving these new functionalities,” adds Masic.
Equipped with their new understanding of the nanonetwork, the team experimented with different electrolytes and their concentrations to see how they impacted energy storage density. As Damian Stefaniuk, first author and EC³ Hub research scientist, highlights, “we found that there is a wide range of electrolytes that could be viable candidates for ec3. This even includes seawater, which could make this a good material for use in coastal and marine applications, perhaps as support structures for offshore wind farms.”
At the same time, the team streamlined the way they added electrolytes to the mix. Rather than curing ec3 electrodes and then soaking them in electrolyte, they added the electrolyte directly into the mixing water. Since electrolyte penetration was no longer a limitation, the team could cast thicker electrodes that stored more energy.
The team achieved the greatest performance when they switched to organic electrolytes, especially those that combined quaternary ammonium salts — found in everyday products like disinfectants — with acetonitrile, a clear, conductive liquid often used in industry. A cubic meter of this version of ec3 — about the size of a refrigerator — can store over 2 kilowatt-hours of energy. That’s about enough to power an actual refrigerator for a day.
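For readers who want to sanity-check the figures above, the arithmetic is straightforward. The sketch below uses only the numbers quoted in the article, plus an assumed typical refrigerator draw that is not from the paper.

```python
# Back-of-envelope check using the article's figures.
# The refrigerator's daily draw is an assumed typical value, not from the paper.

volume_2023_m3 = 45   # cubic meters of ec3 needed for a home's daily storage in 2023
volume_now_m3 = 5     # cubic meters needed with the improved electrolyte
print(f"Volume reduction: {volume_2023_m3 / volume_now_m3:.0f}x, roughly an order of magnitude")

energy_per_m3_kwh = 2.0       # article: "over 2 kilowatt-hours" per cubic meter
fridge_kwh_per_day = 1.5      # assumed typical refrigerator consumption (hypothetical)
print(f"One cubic meter could run such a fridge for about "
      f"{energy_per_m3_kwh / fridge_kwh_per_day:.1f} days")
```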
While batteries maintain a higher energy density, ec3 can in principle be incorporated directly into a wide range of architectural elements — from slabs and walls to domes and vaults — and last as long as the structure itself.
“The Ancient Romans made great advances in concrete construction. Massive structures like the Pantheon stand to this day without reinforcement. If we keep up their spirit of combining material science with architectural vision, we could be at the brink of a new architectural revolution with multifunctional concretes like ec3,” proposes Masic.
Taking inspiration from Roman architecture, the team built a miniature ec3 arch to show how structural form and energy storage can work together. Operating at 9 volts, the arch supported its own weight and additional load while powering an LED light.
However, something unique happened when the load on the arch increased: the light flickered. This is likely due to the way stress impacts electrical contacts or the distribution of charges. “There may be a kind of self-monitoring capacity here. If we think of an ec3 arch at architectural scale, its output may fluctuate when it’s impacted by a stressor like high winds. We may be able to use this as a signal of when and to what extent a structure is stressed, or monitor its overall health in real time,” envisions Masic.
The latest developments in ec³ technology bring it a step closer to real-world scalability. It’s already been used to heat sidewalk slabs in Sapporo, Japan, due to its thermally conductive properties, representing a potential alternative to salting. “With these higher energy densities and demonstrated value across a broader application space, we now have a powerful and flexible tool that can help us address a wide range of persistent energy challenges,” explains Stefaniuk. “One of our biggest motivations was to help enable the renewable energy transition. Solar power, for example, has come a long way in terms of efficiency. However, it can only generate power when there’s enough sunlight. So, the question becomes: How do you meet your energy needs at night, or on cloudy days?”
Franz-Josef Ulm, EC³ Hub co-director and CEE professor, continues the thread: “The answer is that you need a way to store and release energy. This has usually meant a battery, which often relies on scarce or harmful materials. We believe that ec3 is a viable substitute, letting our buildings and infrastructure meet our energy storage needs.” The team is working toward applications like parking spaces and roads that could charge electric vehicles, as well as homes that can operate fully off the grid.
“What excites us most is that we’ve taken a material as ancient as concrete and shown that it can do something entirely new,” says James Weaver, a co-author on the paper who is an associate professor of design technology and materials science and engineering at Cornell University, as well as a former EC³ Hub researcher. “By combining modern nanoscience with an ancient building block of civilization, we’re opening a door to infrastructure that doesn’t just support our lives, it powers them.”
An electron-conducting carbon concrete (ec³)-based arch structure integrates supercapacitor electrodes for dual functionality. The prototype demonstrates both structural load bearing and the ability to power an LED, with the light’s intensity varying under applied load, highlighting the potential for real-time structural health monitoring via the supercapacitor.
Arts & Culture
Steve McQueen could lecture you, but he’s got other plans
While at Harvard, Norton lecturer Steve McQueen joined undergrads at ArtsBites, a luncheon and discussion series at the Office for the Arts. Photos by Stephanie Mitchell/Harvard Staff Photographer
Eileen O’Grady
Harvard Staff Writer
October 1, 2025
4 min read
‘I think the audience needs more, and I feel I need to give more,’ says award-winning filmmaker — presenter of this year’s Norton talks
For Steve McQueen, live performance generates a force that no podium lecture can match.
It’s a conviction that shapes the Norton Lectures he is delivering this fall, in a series titled “Pulse,” featuring film, musical performances, and dialogue.
“There’s a certain kind of energy that could be produced by a performative idea of communication, and that’s what I’m interested in,” said the Academy Award-winning director of “12 Years a Slave.” “I’m not the kind of person who stands for an hour reading from a piece of paper. I think the audience needs more, and I feel I need to give more. I also feel the dialogue — the back-and-forth clash of two stones making a fire — could be thought-provoking for the audience as well as the participants.”
McQueen is recognized internationally for producing work that explores painful and challenging histories and exposes the fragility of the human condition. He directed the feature films “Blitz” (2024) and “Hunger” (2008), as well as the documentaries “Uprising” (2021) and “Occupied City” (2023).
The first of McQueen’s six Norton Lectures, delivered Tuesday, centered on the FBI files of legendary Black singer, actor, and political activist Paul Robeson. McQueen’s 2012-2022 video work “End Credits” features a continuous projection of digitally scanned files from the thousands of highly redacted, declassified documents the FBI kept on Robeson and his wife, Eslanda Goode Robeson, for most of the singer’s life, surveillance that greatly damaged his career as a performer.
The lecture featured four performers reading sections from Robeson’s FBI files as visuals from the film were projected behind them. Afterward, McQueen and Dia Art Foundation curator Donna De Salvo spoke with Henry Louis Gates Jr., Alphonse Fletcher University Professor and director of the Hutchins Center for African and African American Research. In an interview with the Gazette, McQueen said that the first lecture was intended to reflect a sense of “urgency.”
“Small Axe,” the second lecture, scheduled for Oct. 21, centers on McQueen’s film anthology by the same name. The five films depict the experiences of West Indian immigrants in London from the 1960s to the 1980s, with five unique stories rooted in the Black British experience during a period of social and political upheaval. The films are being screened at the Harvard Film Archive ahead of the lecture.
“It’s one of those situations where people come together to combat a certain kind of power,” McQueen said. “It is within this proverb: ‘If you are the big tree, we are the small axe.’ If we’re working together, we can actually get things done.”
The third lecture, “Bass,” centers on an immersive installation that McQueen created in 2024. The work, inspired in part by the Middle Passage and the trans-Atlantic journey of enslaved people, features a combination of music focused on the low-end frequency of the double bass, with colored lights.
“‘Bass’ is about a constant,” McQueen explained. “It’s about how sound is a liberation, in a way, to all of us.”
That lecture will feature a performance by bassist, singer-songwriter, and poet Meshell Ndegeocello. McQueen and De Salvo will speak with Noam M. Elcott, an art historian and faculty member at Columbia and Yale.
McQueen said he feels “very honored” to be delivering the Norton Lectures in the series’ 100th year.
“People, when they come to a lecture or any kind of event, they bring themselves, and therefore they bring their baggage,” McQueen said. “Whatever they take out from it is what they’re dealing with individually. I’m hoping that they have that something to take away with them. That’s as much as I can hope for.”
John Bolton. Photos by Niles Singer/Harvard Staff Photographer
Nation & World
U.S. just didn’t get China, Bolton says
Asian nation now main economic, military threat to Western democracies, according to former national security adviser
Christina Pazzanese
Harvard Staff Writer
October 1, 2025
4 min read
The U.S. got it wrong on China.
China now is seen as the main threat to Western democracies over the coming decades “and I don’t think we’re prepared for it on a number of levels,” said John Bolton, former national security adviser to President Trump during his first term.
During a conversation about U.S. national security at Harvard Kennedy School on Monday evening, Bolton said the U.S. “badly misunderstood” how China’s economic growth and rising influence would affect global politics, and mistakenly believed the rise of a middle class would prompt the nation to become more democratic.
“We were wrong on both accounts,” said Bolton, who also served as an acting United Nations ambassador during the George W. Bush administration.
Instead, President Xi Jinping has emerged as the most powerful leader of China since Mao, and China could become a nuclear peer with the U.S. and Russia by 2030, maybe sooner. The addition of a third nuclear superpower could destabilize the delicate balance achieved by the U.S. and Russia after decades of talks and agreements over arms control and deterrence.
“China’s development of a nuclear striking capability close to or equal to Russia and the United States, I think, is the gravest threat to world peace this century,” Bolton told Ned Price, an adviser to former Secretary of State Antony Blinken and State Department spokesperson during the Biden administration. Price is currently a fall fellow at the Institute of Politics.
The U.S. should be very concerned about the emerging partnership between Russia, China, North Korea, and Iran, especially a China-Russia “axis” where China is the dominant partner over Russia, Bolton said.
Despite its denials, China has aided Russia in the war in Ukraine by buying more Russian oil, helping launder sanctioned Russian financial assets, and providing weapons in parts that can then be reassembled, Bolton said.
By the end of this century, he predicted, Eastern Russia will likely become Chinese territory.
In Bolton’s view, the top priority for the U.S. during Trump’s first term was securing big trade deals with China and other countries, such as Japan, rather than addressing broader considerations like national security or human rights.
“The bigger strategic picture was lost,” Bolton said.
Bolton was highly skeptical that there will ever be a Palestinian state, and drily mocked Trump’s declaration during a White House press event earlier in the day of a Gaza peace plan to end the nearly two-year war between Israel and Hamas and his appointment of former U.K. Prime Minister Tony Blair to oversee post-war Gaza.
“I don’t think what was announced today is going to happen,” because Hamas and Iran are unlikely to agree to the terms outlined by the White House, said Bolton, who rebuked the United Nations Relief and Works Agency (UNRWA) for having “mistreated” Palestinian refugees for decades and called for the agency’s elimination.
Though still a critic of the U.N. and other “soft power” efforts like USAID, Bolton took aim at recent cuts to Voice of America, Radio Free Europe, and other U.S. government-funded news outlets. Many around the world relied on them for news about the U.S., “and now they’re gone. That is a huge vacuum that we’ve created for our adversaries to fill,” he said. “It was a huge mistake.”
Bolton pushed back on detractors who say regime change is his default foreign policy strategy, saying there are only two viable options to deal with adversaries or rogue nations.
“You either see if you can change their behavior or if you can’t change their behavior, change the regime,” he said.
Health
Smart patch reduces cravings for alcohol and drugs
Researchers encouraged by results in ‘immensely challenging’ first year of recovery
Mass General Brigham Communications
October 1, 2025
3 min read
A new study by investigators from Mass General Brigham and Harvard shows that a non-drug, wearable device can help people with substance use disorders manage stress, reduce cravings, and lower their risk of relapse in real time. The results are published in JAMA Psychiatry.
“One of the hallmarks of early addiction recovery is poor self-awareness of emotional states,” said corresponding author David Eddie, a Harvard-Mass General psychologist at the Recovery Research Institute at Massachusetts General Hospital. “People in recovery can experience a lot of stress, but they often don’t have great awareness of it or proactively manage it.”
For people in early recovery, stress often triggers cravings, and the struggle to resist those urges can create even more stress. Together, cravings and stress can lead to relapse. Stress and craving also tend to be associated with lower heart rate variability (HRV) — the natural variations in time between heartbeats, which reflects underlying health as well as how the body adapts to stress.
Special breathing exercises can raise heart rate variability, helping to regulate mood and improve cognitive control. Newer biofeedback devices can detect low heart rate variability and provide visual or auditory cues to guide breathing adjustments. Eddie’s previous studies have found that biofeedback can reduce craving and anxiety in people with substance use disorder.
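To make “heart rate variability” concrete, the sketch below computes RMSSD, one widely used HRV metric, from beat-to-beat (RR) intervals. The metric choice and the numbers are illustrative assumptions; the study does not describe how the wearable computes HRV internally.

```python
import math

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences between heartbeats (RMSSD).
    Higher values generally indicate greater heart rate variability."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Hypothetical beat-to-beat intervals in milliseconds
relaxed = [820, 845, 790, 860, 805, 850]
stressed = [760, 765, 758, 762, 759, 763]  # more uniform beats -> lower variability

print(f"Relaxed RMSSD:  {rmssd(relaxed):.1f} ms")
print(f"Stressed RMSSD: {rmssd(stressed):.1f} ms")
```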
64%: how much less likely participants were to use substances when wearing the device
In the study, supported by the National Institute on Drug Abuse and the National Institute on Alcohol Abuse and Alcoholism, researchers tested whether a heart rate variability biofeedback device could support recovery from substance use disorders by conducting a phase 2 clinical trial of 115 adults with severe substance use disorder in their first year of recovery. Half the participants received a biofeedback smart patch device (the Lief HRVB Smart Patch), and the other half followed the recovery plan they had in place, such as recovery meetings, psychotherapy, or medications. Over eight weeks, participants reported their mood, cravings, and any substance use twice a day with their smartphones.
“The latest HRV biofeedback devices can detect when people are stressed or experiencing cravings, and, using AI, prompt them to do a brief burst of biofeedback,” Eddie said. “This allows people to get out in front of risk.”
Participants were asked to do at least 10 minutes of scheduled practice a day and at least five minutes of prompted practice. The participants who received a biofeedback device reported fewer negative emotions and fewer cravings for alcohol or drugs, and were 64 percent less likely to use substances on any given day, suggesting that the intervention interfered with the cycle of craving and substance use.
The study focused only on people in the first year of an abstinence-based recovery attempt, and future studies are needed to determine if the intervention has sustained benefits.
“The first year of recovery is immensely challenging,” said Eddie. “Our goal is to find tools that not only bridge people during that first year, but also help them manage their stress for the rest of their life.”
Palladium is one of the keys to jump-starting a hydrogen-based energy economy. The silvery metal is a natural gatekeeper against every gas except hydrogen, which it readily lets through. For its exceptional selectivity, palladium is considered one of the most effective materials at filtering gas mixtures to produce pure hydrogen.
Today, palladium-based membranes are used at commercial scale to provide pure hydrogen for semiconductor manufacturing, food processing, and fertilizer production, among other applications in which the membranes operate at modest temperatures. If palladium membranes get much hotter than around 800 kelvins, they can break down.
Now, MIT engineers have developed a new palladium membrane that remains resilient at much higher temperatures. Rather than being made as a continuous film, as most membranes are, the new design is made from palladium that is deposited as “plugs” into the pores of an underlying supporting material. At high temperatures, the snug-fitting plugs remain stable and continue separating out hydrogen, rather than degrading as a surface film would.
The thermally stable design opens opportunities for membranes to be used in hydrogen-fuel-generating technologies such as compact steam methane reforming and ammonia cracking — technologies that are designed to operate at much higher temperatures to produce hydrogen for zero-carbon-emitting fuel and electricity.
“With further work on scaling and validating performance under realistic industrial feeds, the design could represent a promising route toward practical membranes for high-temperature hydrogen production,” says Lohyun Kim PhD ’24, a former graduate student in MIT’s Department of Mechanical Engineering.
Kim and his colleagues report details of the new membrane in a study appearing today in the journal Advanced Functional Materials. The study’s co-authors are Randall Field, director of research at the MIT Energy Initiative (MITEI); former MIT chemical engineering graduate student Chun Man Chow PhD ’23; Rohit Karnik, the Jameel Professor in the Department of Mechanical Engineering at MIT and the director of the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS); and Aaron Persad, a former MIT research scientist in mechanical engineering who is now an assistant professor at the University of Maryland Eastern Shore.
Compact future
The team’s new design came out of a MITEI project related to fusion energy. Future fusion power plants, such as the one MIT spinout Commonwealth Fusion Systems is designing, will involve circulating the hydrogen isotopes deuterium and tritium at extremely high temperatures to produce energy from their fusion. The reactions inevitably produce other gases that will have to be separated, and the hydrogen isotopes will be recirculated into the main reactor for further fusion.
Similar issues arise in a number of other processes for producing hydrogen, where gases must be separated and recirculated back into a reactor. Concepts for such recirculating systems would require first cooling down the gas before it can pass through hydrogen-separating membranes — an expensive and energy-intensive step that would involve additional machinery and hardware.
“One of the questions we were thinking about is: Can we develop membranes which could be as close to the reactor as possible, and operate at higher temperatures, so we don’t have to pull out the gas and cool it down first?” Karnik says. “It would enable more energy-efficient, and therefore cheaper and compact, fusion systems.”
The researchers looked for ways to improve the temperature resistance of palladium membranes. Palladium is the most effective metal used today to separate hydrogen from a variety of gas mixtures. It naturally attracts hydrogen molecules (H2) to its surface, where the metal’s electrons interact with and weaken the molecule’s bonds, causing H2 to temporarily break apart into its respective atoms. The individual atoms then diffuse through the metal and join back up on the other side as pure hydrogen.
Palladium is highly effective at permeating hydrogen, and only hydrogen, from streams of various gases. But conventional membranes can typically operate at temperatures only up to about 800 kelvins before the film starts to form holes or clump into droplets, allowing other gases to flow through.
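For context, hydrogen transport through dense palladium is commonly described by Sieverts’ law: because H2 dissociates into atoms before diffusing, the flux scales with the difference of the square roots of the hydrogen partial pressures on either side of the film. The sketch below is a generic textbook illustration with made-up numbers, not a model or data from the MIT study.

```python
import math

def hydrogen_flux(permeability, thickness_m, p_feed_pa, p_permeate_pa):
    """Sieverts'-law flux through a dense palladium film (mol m^-2 s^-1).
    The square roots reflect H2 splitting into atoms before diffusing."""
    return (permeability / thickness_m) * (math.sqrt(p_feed_pa) - math.sqrt(p_permeate_pa))

# Hypothetical values for illustration only
flux = hydrogen_flux(permeability=1e-8,   # mol m^-1 s^-1 Pa^-0.5 (assumed)
                     thickness_m=5e-6,    # 5-micron film (assumed)
                     p_feed_pa=4e5,       # 4 bar hydrogen on the feed side
                     p_permeate_pa=1e5)   # 1 bar on the permeate side
print(f"Hydrogen flux ~ {flux:.2f} mol per square meter per second")
```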
Plugging in
Karnik, Kim, and their colleagues took a different design approach. They observed that at high temperatures, a palladium film tends to ball up. In engineering terms, the material is acting to reduce its surface energy: palladium, like most other materials, and even water, will pull apart and form droplets, the shape with the lowest surface energy. The lower the surface energy, the more stable the material is against further heating.
This gave the team an idea: If a supporting material’s pores could be “plugged” with deposits of palladium — essentially already forming a droplet with the lowest surface energy — the tight quarters might substantially increase palladium’s heat tolerance while preserving the membrane’s selectivity for hydrogen.
To test this idea, they fabricated small chip-sized samples of membrane using a porous silica supporting layer (each pore measuring about half a micron wide), onto which they deposited a very thin layer of palladium. They applied techniques to essentially grow the palladium into the pores, and polished down the surface to remove the palladium layer and leave palladium only inside the pores.
They then placed the samples in a custom-built apparatus in which they flowed hydrogen-containing gas mixtures of various compositions and temperatures to test the membranes’ separation performance. The membranes remained stable and continued to separate hydrogen from other gases even after experiencing temperatures of up to 1,000 kelvins for over 100 hours — a significant improvement over conventional film-based membranes.
“The use of palladium film membranes is generally limited to below around 800 kelvins, at which point they degrade,” Kim says. “Our plug design therefore extends palladium’s effective heat resilience by at least 200 kelvins and maintains integrity far longer under extreme conditions.”
These conditions are within the range of hydrogen-generating technologies such as steam methane reforming and ammonia cracking.
Steam methane reforming is an established process that has required complex, energy-intensive systems to preprocess methane to a form where pure hydrogen can be extracted. Such preprocessing steps could be replaced with a compact “membrane reactor,” through which a methane gas would directly flow, and the membrane inside would filter out pure hydrogen. Such reactors would significantly cut down the size, complexity, and cost of producing hydrogen from steam methane reforming, and Kim estimates a membrane would have to work reliably in temperatures of up to nearly 1,000 kelvins. The team’s new membrane could work well within such conditions.
Ammonia cracking is another way to produce hydrogen, by “cracking” or breaking apart ammonia. As ammonia is very stable in liquid form, scientists envision that it could be used as a carrier for hydrogen and be safely transported to a hydrogen fuel station, where ammonia could be fed into a membrane reactor that again pulls out hydrogen and pumps it directly into a fuel cell vehicle. Ammonia cracking is still largely in pilot and demonstration stages, and Kim says any membrane in an ammonia cracking reactor would likely operate at temperatures of around 800 kelvins — within the range of the group’s new plug-based design.
Karnik emphasizes that their results are just a start. Adopting the membrane into working reactors will require further development and testing to ensure it remains reliable over much longer periods of time.
“We showed that instead of making a film, if you make discretized nanostructures you can get much more thermally stable membranes,” Karnik says. “It provides a pathway for designing membranes for extreme temperatures, with the added possibility of using smaller amounts of expensive palladium, toward making hydrogen production more efficient and affordable. There is potential there.”
This work was supported by Eni S.p.A. via the MIT Energy Initiative.
This work utilized facilities at the MIT Materials Research Laboratory (MRL), the MIT Laboratory for Manufacturing and Productivity (LMP), and MIT.nano.
Health
Reeling in a big scientific discovery
William Kaelin pursued Nobel-winning findings using a fisherman’s instinct
Sy Boles
Harvard Staff Writer
October 1, 2025
6 min read
Veasey Conway/Harvard Staff Photographer
Part of the Profiles of Progress series
Scientific discovery, according to William Kaelin, is a little bit like fishing: You can be taught how to bait a hook or cast a line, but there is an art to knowing where to look for the big one.
Over the course of decades, Kaelin meticulously discovered a fundamental physiological mechanism: the way that cells sense and respond to oxygen levels. The work led to novel treatments for kidney cancer, and in 2019 it earned him a joint Nobel Prize in physiology or medicine, along with Peter Ratcliffe and Gregg Semenza.
Kaelin says the groundbreaking research, which also has implications for the treatment of conditions such as anemia and heart attacks, was based on looking in the right place.
“A lot of science is just seeing connections and possibilities,” said Kaelin, the Sidney Farber Professor of Medicine at Dana-Farber Cancer Institute and Harvard Medical School. “I used to think it was mostly about mastering fancy techniques, but that is really of secondary importance. It’s really picking a good question to work on, and seeing a possible connection that other people hadn’t seen.”
Kaelin, who was born in 1957 and grew up fishing with his dad on the south shore of Long Island, recalls his parents supplying him with chemistry kits, construction toys, and a microscope to foster an interest in the sciences. “We were in the midst of the Cold War and the space race,” he said. “Scientists and engineers were celebrated.”
He was drawn to mathematics, where problems have one correct answer, and computer science, where a simple message to the mainframe leads to a clear result. At Duke University, he pursued a pre-med degree and went on to medical school. It was during his third year, while he was working in a lab studying blood flow to tumors, that he made the first observation that would send him down his Nobel-winning path. “I started reading about this unusual disease called von Hippel-Lindau disease,” he said.
Patients with von Hippel-Lindau disease, or VHL, develop tumors in multiple organs. The tumors, Kaelin learned, somehow stimulate the excess formation of new blood vessels, a process called angiogenesis.
Years later, when he was chief medical resident at Johns Hopkins, VHL showed up again in a different body of literature, listed as a cause of excess red blood cell production.
He remembers thinking at the time: “Here’s von Hippel-Lindau disease-related tumors. What are they doing on this list?”
He was learning to think like a scientist.
When Kaelin launched his own lab at Dana-Farber, he returned to the lingering puzzle. His working hypothesis: Since increased angiogenesis and increased red blood cell production are two ways that tissues try to deal with low oxygen, perhaps the VHL gene was required for cells to sense oxygen properly. He reasoned that studying the VHL gene could teach him about angiogenesis, about oxygen sensing, and even about a common cancer, namely kidney cancer. That’s because even non-hereditary kidney cancers usually have acquired VHL mutations at some point in the patient’s lifetime, in contrast to VHL disease, where a mutation is inherited.
There was particular interest around angiogenesis when Kaelin started his laboratory because of the pioneering work of Harvard professor Judah Folkman, who championed the idea of treating cancers with angiogenesis inhibitors.
“If we were going to have angiogenesis inhibitors, we were really going to need to understand the molecular circuitry that controls angiogenesis,” Kaelin said. “Seemingly, the VHL gene and its protein product must play some role in this, because if it’s defective, you make too many blood vessels.”
It was known that VHL gene mutations caused VHL disease, but the question was how. Like most genes, the VHL gene contains the instructions for a protein, in this case called the VHL protein. Kaelin’s research — much of it supported by federal funding — confirmed the hypothesis that the VHL protein is required for oxygen sensing. Together with others in the field, his work showed that the protein binds to a protein called HIF-alpha and targets it for destruction, unless oxygen is scarce. In other words, HIF-alpha is the master regulator of the cell’s response to low oxygen.
In healthy cells, VHL keeps HIF-alpha in check. But when the VHL gene is mutated, as in VHL-associated tumors, HIF-alpha accumulates, aberrantly triggering the overproduction of red blood cells and abnormal blood vessel growth — hallmarks of VHL disease and of many cancers.
The finding explained many of the clinical characteristics of VHL-associated tumors, but it still raised the question of how the VHL protein “knows” whether oxygen is present, and hence whether to target HIF-alpha for destruction. Kaelin and his co-Nobelist Ratcliffe, working independently, showed that a little chemical “flag” is added to the HIF-alpha protein when oxygen is present, which signals the VHL protein to degrade HIF-alpha.
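The logic described above can be condensed into a toy decision function. This is a deliberate oversimplification of the prose for illustration; it is not a model drawn from the research itself.

```python
def hif_alpha_fate(oxygen_present: bool, vhl_functional: bool) -> str:
    """Toy summary of the oxygen-sensing logic described above."""
    # Oxygen allows the chemical "flag" (hydroxylation) to be added to HIF-alpha.
    hif_flagged = oxygen_present
    if hif_flagged and vhl_functional:
        return "HIF-alpha degraded: no low-oxygen response"
    # Either oxygen is scarce (no flag) or the VHL protein is defective.
    return "HIF-alpha accumulates: more blood vessels and red blood cells"

print(hif_alpha_fate(oxygen_present=True,  vhl_functional=True))   # healthy cell, normal oxygen
print(hif_alpha_fate(oxygen_present=False, vhl_functional=True))   # healthy response to low oxygen
print(hif_alpha_fate(oxygen_present=True,  vhl_functional=False))  # VHL-mutant tumor cell
```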
The mechanism is elegant in its simplicity, a basic balancing of elements in the body that was not understood until the right person with the right training asked the right question. Kaelin says it’s gratifying that the research led to the development of drugs that target the oxygen-sensing process, leading to new treatments for cancer and for anemia caused by kidney failure.
“A lot of science is seeing connections and being primed to recognize a possibility,” he said. “But to get to that point, you have to invest in educating people, training people, exposing them to different ways of thinking, exposing them to what’s been done before them.”
Kaelin worries that the esteem for science that sent him on his path might not support the next generation. Although his lab thus far has not been affected by the federal government’s cancellation of some $2.2 billion in research funding to Harvard, he has been devastated to see the impact on his colleagues. (A U.S. District Court in September ruled the government acted unlawfully when it cut grants, and previously frozen research dollars have started flowing to researchers again.)
“I was the product of bipartisan support for science and engineering, not just in terms of funding, but also messaging — again, treating scientists and engineers like heroes,” he said. “It set up a virtuous cycle, because we were attracting talent and investing money, so we were doing great science. This was perceived as a place to do great science, which meant we attracted more science and more capital. Now it seems like we’re doing all the things that you would do to try to undo that.”
The Action Research Collaborative, housed in the Bronfenbrenner Center for Translational Research, is partnering with a New York state agency to strengthen early childhood care and education across the state.
A diet rich in the amino acid cysteine may have rejuvenating effects in the small intestine, according to a new study from MIT. This amino acid, the researchers discovered, can turn on an immune signaling pathway that helps stem cells to regrow new intestinal tissue.
This enhanced regeneration may help to heal injuries from radiation, which often occur in patients undergoing radiation therapy for cancer. The research was conducted in mice, but if future research shows similar results in humans, then delivering elevated quantities of cysteine, through diet or supplements, could offer a new strategy to help damaged tissue heal faster, the researchers say.
“The study suggests that if we give these patients a cysteine-rich diet or cysteine supplementation, perhaps we can dampen some of the chemotherapy or radiation-induced injury,” says Omer Yilmaz, director of the MIT Stem Cell Initiative, an associate professor of biology at MIT, and a member of MIT’s Koch Institute for Integrative Cancer Research. “The beauty here is we’re not using a synthetic molecule; we’re exploiting a natural dietary compound.”
While previous research has shown that certain types of diets, including low-calorie diets, can enhance intestinal stem cell activity, the new study is the first to identify a single nutrient that can help intestinal cells to regenerate.
Yilmaz is the senior author of the study, which appears today in Nature. Koch Institute postdoc Fangtao Chi is the paper’s lead author.
Boosting regeneration
It is well-established that diet can affect overall health: High-fat diets can lead to obesity, diabetes, and other health problems, while low-calorie diets have been shown to extend lifespans in many species. In recent years, Yilmaz’s lab has investigated how different types of diets influence stem cell regeneration, and found that high-fat diets, as well as short periods of fasting, can enhance stem cell activity in different ways.
“We know that macro diets such as high-sugar diets, high-fat diets, and low-calorie diets have a clear impact on health. But at the granular level, we know much less about how individual nutrients impact stem cell fate decisions, as well as tissue function and overall tissue health,” Yilmaz says.
In their new study, the researchers began by feeding mice a diet high in one of 20 different amino acids, the building blocks of proteins. For each group, they measured how the diet affected intestinal stem cell regeneration. Among these amino acids, cysteine had the most dramatic effects on stem cells and progenitor cells (immature cells that differentiate into adult intestinal cells).
Further studies revealed that cysteine initiates a chain of events leading to the activation of a population of immune cells called CD8 T cells. When cells in the lining of the intestine absorb cysteine from digested food, they convert it into CoA, a cofactor that is released into the mucosal lining of the intestine. There, CD8 T cells absorb CoA, which stimulates them to begin proliferating and producing a cytokine called IL-22.
IL-22 is an important player in the regulation of intestinal stem cell regeneration, but until now, it wasn’t known that CD8 T cells can produce it to boost intestinal stem cells. Once activated, those IL-22-releasing T cells are primed to help combat any kind of injury that could occur within the intestinal lining.
“What’s really exciting here is that feeding mice a cysteine-rich diet leads to the expansion of an immune cell population that we typically don’t associate with IL-22 production and the regulation of intestinal stemness,” Yilmaz says. “What happens in a cysteine-rich diet is that the pool of cells that make IL-22 increases, particularly the CD8 T-cell fraction.”
These T cells tend to congregate within the lining of the intestine, so they are already in position when needed. The researchers found that the stimulation of CD8 T cells occurred primarily in the small intestine, not in any other part of the digestive tract, which they believe is because most of the protein that we consume is absorbed by the small intestine.
Healing the intestine
In this study, the researchers showed that regeneration stimulated by a cysteine-rich diet could help to repair radiation damage to the intestinal lining. Also, in work that has not been published yet, they showed that a high-cysteine diet had a regenerative effect following treatment with a chemotherapy drug called 5-fluorouracil. This drug, which is used to treat colon and pancreatic cancers, can also damage the intestinal lining.
Cysteine is found in many high-protein foods, including meat, dairy products, legumes, and nuts. The body can also synthesize its own cysteine, by converting the amino acid methionine to cysteine — a process that takes place in the liver. However, cysteine produced in the liver is distributed through the entire body and doesn’t lead to a buildup in the small intestine the way that consuming cysteine in the diet does.
“With our high-cysteine diet, the gut is the first place that sees a high amount of cysteine,” Chi says.
Cysteine has been previously shown to have antioxidant effects, which are also beneficial, but this study is the first to demonstrate its effect on intestinal stem cell regeneration. The researchers now hope to study whether it may also help other types of stem cells regenerate new tissues. In one ongoing study, they are investigating whether cysteine might stimulate hair follicle regeneration.
They also plan to further investigate some of the other amino acids that appear to influence stem cell regeneration.
“I think we’re going to uncover multiple new mechanisms for how these amino acids regulate cell fate decisions and gut health in the small intestine and colon,” Yilmaz says.
The research was funded, in part, by the National Institutes of Health, the V Foundation, the Koch Institute Frontier Research Program via the Kathy and Curt Marble Cancer Research Fund, the Bridge Project — a partnership between the Koch Institute for Integrative Cancer Research at MIT and the Dana-Farber/Harvard Cancer Center, the American Federation for Aging Research, the MIT Stem Cell Initiative, and the Koch Institute Support (core) Grant from the National Cancer Institute.
Cysteine is found in many high-protein foods, including meat, dairy products, legumes, and nuts. A diet rich in cysteine has rejuvenating effects in the small intestine, according to a new study.
An international study led by researchers at the University of Cambridge has discovered that autism diagnosed in early childhood has a different genetic and developmental profile to autism diagnosed from late childhood onwards.
The scientists say that the findings challenge the long-held assumption that autism is a single condition with a unified underlying cause.
Published in Nature, the study analysed behavioural data across childhood and adolescence from the UK and Australia, and genetic data from over 45,000 autistic individuals across several large cohorts in Europe and the US.
Scientists from Cambridge’s Department of Psychiatry found that children diagnosed as autistic earlier in life (typically before six years old) were more likely to show behavioural difficulties from early childhood, such as problems with social interaction.
However, those diagnosed with autism later on in life (in late childhood or beyond) were more likely to experience social and behavioural difficulties during adolescence. They also had an increased likelihood of mental health conditions such as depression.
The team then linked the genetic data to the age at diagnosis among autistic people. They found that the underlying genetic profiles differed between those diagnosed with autism earlier and later in life, with only a modest overlap.
In fact, the average genetic profile of later-diagnosed autism is closer to that of ADHD, as well as to mental health conditions like depression and PTSD, than it is to autism diagnosed in early childhood.
The study’s authors point out that a lack of support in early childhood will also play a role in increased risk of mental health issues in the later-diagnosed group, for example by being more vulnerable to bullying pre-diagnosis.
Nevertheless, scientists say that the stronger genetic overlap between later-diagnosed autism and certain psychiatric disorders suggests there may be some genetic factors that partly increase the risk of mental health conditions among those diagnosed with autism later in life.
“We found that, on average, individuals diagnosed with autism earlier and later in life follow different developmental pathways, and surprisingly have different underlying genetic profiles,” said lead author Xinhe Zhang from the University of Cambridge.
“Our findings suggest that the timing of autism diagnosis reflects more than just differences in access to healthcare or awareness, important as these are. However, it is important to note that these are average differences on a gradient, so earlier and later diagnosed autism are not valid diagnostic terms.”
The study looked at “polygenic” factors: sets of thousands of genetic variants that can collectively shape particular traits. The team found that commonly heritable polygenic factors explain around 11% of the variation in age at autism diagnosis.
“The term ‘autism’ likely describes multiple conditions,” said senior author Dr Varun Warrier from Cambridge’s Department of Psychiatry. “For the first time, we have found that earlier and later diagnosed autism have different underlying biological and developmental profiles.”
“An important next step will be to understand the complex interaction between genetics and social factors that lead to poorer mental health outcomes among later-diagnosed autistic individuals.”
This study has implications for how autism is conceptualised, studied, and supported, say the research team. It suggests that genetic and developmental variation contributes to when and how autistic traits manifest, and why some individuals are diagnosed only later in life.
“Some of the genetic influences predispose people to show autism traits from a very young age that may be more easily identified, leading to an earlier diagnosis,” added Warrier. “For others, genetic influences may alter which autism features emerge and when. Some of these children may have features that are not picked up by parents or caregivers until they cause significant distress in late childhood or adolescence.”
“Understanding how the features of autism emerge not just in early childhood but later in childhood and adolescence could help us recognise, diagnose, and support autistic people of all ages.”
Researchers find different genetic profiles related to two trajectories that autistic children tend to follow: one linked to early diagnosis and communication difficulties in infancy, and the other linked to later diagnosis, increased social and behavioural difficulties in adolescence, and higher rates of conditions like ADHD, depression, and PTSD.
Say a local concert venue wants to engage its community by giving social media followers an easy way to share and comment on new music from emerging artists. Rather than working within the constraints of existing social platforms, the venue might want to create its own social app with the functionality that would be best for its community. But building a new social app from scratch involves many complicated programming steps, and even if the venue can create a customized app, the organization’s followers may be unwilling to join the new platform because it could mean leaving their connections and data behind.
Now, researchers from MIT have launched a framework called Graffiti that makes building personalized social applications easier, while allowing users to migrate between multiple applications without losing their friends or data.
“We want to empower people to have control over their own designs rather than having them dictated from the top down,” says electrical engineering and computer science graduate student Theia Henderson.
Henderson and her colleagues designed Graffiti with a flexible structure so individuals have the freedom to create a variety of customized applications, from messenger apps like WhatsApp to microblogging platforms like X to location-based social networking sites like Nextdoor, all using only front-end development tools like HTML.
The protocol ensures all applications can interoperate, so content posted on one application can appear on any other application, even those with disparate designs or functionality. Importantly, Graffiti users retain control of their data, which is stored on a decentralized infrastructure rather than being held by a specific application.
While the pros and cons of implementing Graffiti at scale remain to be fully explored, the researchers hope this new approach can someday lead to healthier online interactions.
“We’ve shown that you can have a rich social ecosystem where everyone owns their own data and can use whatever applications they want to interact with whoever they want in whatever way they want. And they can have their own experiences without losing connection with the people they want to stay connected with,” says David Karger, professor of EECS and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL).
Henderson, the lead author, and Karger are joined by MIT Research Scientist David D. Clark on a paper about Graffiti, which will be presented at the ACM Symposium on User Interface Software and Technology.
Personalized, integrated applications
With Graffiti, the researchers had two main goals: to lower the barrier to creating personalized social applications and to enable those personalized applications to interoperate without requiring permission from developers.
To make the design process easier, they built a collective back-end infrastructure that all applications access to store and share content. This means developers don’t need to write any complex server code. Instead, designing a Graffiti application is more like making a website using popular tools like Vue.
Developers can also easily introduce new features and new types of content, giving them more freedom and fostering creativity.
“Graffiti is so straightforward that we used it as the infrastructure for the intro to web design class I teach, and students were able to write the front-end very easily to come up with all sorts of applications,” Karger says.
The open, interoperable nature of Graffiti means no one entity has the power to set a moderation policy for the entire platform. Instead, multiple competing and contradictory moderation services can operate, and people can choose the ones they like.
Graffiti uses the idea of “total reification,” where every action taken in Graffiti, such as liking, sharing, or blocking a post, is represented and stored as its own piece of data. A user can configure their social application to interpret or ignore those data using its own rules.
For instance, if an application is designed so a certain user is a moderator, posts blocked by that user won’t appear in the application. But for an application with different rules where that person isn’t considered a moderator, other users might just see a warning or no flag at all.
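To make total reification concrete, here is a minimal sketch in Python. The object schema and field names are hypothetical, not Graffiti’s actual data format; the point is simply that every action is stored as its own object, and each application applies its own client-side rules when deciding how to interpret those objects.

```python
# Illustrative sketch only: a hypothetical object schema, not Graffiti's actual data format.
# Every action (posting, liking, blocking) is "reified" as its own stored object, and
# each application interprets those objects with its own client-side rules.

objects = [
    {"type": "post",  "id": "p1", "author": "alice", "text": "New single out Friday!"},
    {"type": "like",  "actor": "bob",   "target": "p1"},
    {"type": "block", "actor": "carol", "target": "p1"},  # carol flags the post
]

def render_feed(objects, moderators):
    """Return the posts an application would show, given which users it treats as moderators."""
    blocked = {o["target"] for o in objects
               if o["type"] == "block" and o["actor"] in moderators}
    return [o for o in objects if o["type"] == "post" and o["id"] not in blocked]

# An app that treats carol as a moderator hides the post she blocked...
print(render_feed(objects, moderators={"carol"}))  # []
# ...while an app with different rules still displays it.
print(render_feed(objects, moderators=set()))      # [{'type': 'post', ...}]
```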
“Theia’s system lets each person pick their own moderators, avoiding the one-size-fits-all approach to moderation taken by the major social platforms,” Karger says.
But at the same time, having no central moderator means there is no one to remove content from the platform that might be offensive or illegal.
“We need to do more research to understand if that is going to provide real, damaging consequences or if the kind of personal moderation we created can provide the protections people need,” he adds.
Empowering social media users
The researchers also had to overcome a problem known as context collapse, which conflicts with their goal of interoperation.
For instance, context collapse would occur if a person’s Tinder profile appeared on LinkedIn, or if a post intended for one group, like close friends, created conflict with another group, such as family members. Context collapse can lead to anxiety and have social repercussions for the user and their different communities.
“We realize that interoperability can sometimes be a bad thing. People have boundaries between different social contexts, and we didn’t want to violate those,” Henderson says.
To avoid context collapse, the researchers designed Graffiti so all content is organized into distinct channels. Channels are flexible and can represent a variety of contexts, such as people, applications, locations, etc.
If a user’s post appears in an application channel but not their personal channel, others using that application will see the post, but those who only follow this user will not.
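The channel idea can be sketched in the same hypothetical style (again, an illustration of the concept rather than Graffiti’s real API): content is published into one or more channels, and a reader only sees what appears in the channels they follow.

```python
# Illustrative sketch only: a hypothetical channel model, not Graffiti's real API.
# Content is published into one or more channels; a reader only sees objects that
# appear in the channels they follow, which keeps different social contexts separate.

store = []  # stands in for the shared, decentralized back-end

def publish(author, text, channels):
    store.append({"author": author, "text": text, "channels": set(channels)})

def read(channels):
    """Return content visible to a reader who follows the given channels."""
    return [o for o in store if o["channels"] & set(channels)]

# Alice posts a review only to a concert-venue app's channel, not to her personal channel.
publish("alice", "Loved last night's opening act", channels=["venue-app"])

print(read(["venue-app"]))       # users of the venue app see the post
print(read(["alice-personal"]))  # followers of only Alice's personal channel see nothing
```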
“Individuals should have the power to choose the audience for whatever they want to say,” Karger adds.
The researchers created multiple Graffiti applications to showcase personalization and interoperability, including a community-specific application for a local concert venue, a text-centric microblogging platform patterned off X, a Wikipedia-like application that enables collective editing, and a real-time messaging app with multiple moderation schemes patterned off WhatsApp and Slack.
“It also leaves room to create so many social applications people haven’t thought of yet. I’m really excited to see what people come up with when they are given full creative freedom,” Henderson says.
In the future, she and her colleagues want to explore additional social applications they could build with Graffiti. They also intend to incorporate tools like graphical editors to simplify the design process. In addition, they want to strengthen Graffiti’s security and privacy.
And while there is still a long way to go before Graffiti could be implemented at scale, the researchers are currently running a user study as they explore the potential positive and negative impacts the system could have on the social media landscape.
MIT researchers developed a new system that enables individuals to more easily create customized social applications that can seamlessly interoperate with one another.
Full speech transcript
Good morning. A very warm welcome to the new academic year and special congratulations to the proctors for this year, newly elected.
It is conventional, at the opening of this address, for the Vice Chancellor to run through all the brilliant things our academics and students have done in the past year – the discoveries, awards won, grants received, books published, degrees with distinction, sporting victories, careers launched, and other headline-grabbing milestones in research and education that define a year at Cambridge. It is conventional, and it is tempting, for these are the things that Cambridge is known for, that make us all feel good, and that give the work of this University meaning. They are the things that get me out of bed in the morning, to be honest, and that give me joy.
But I am going to resist temptation. Instead, I am going to talk about what the University is doing to enable the flow of these brilliant things to continue unabated into the foreseeable future and to have the impact on society and the world that they need to have.
As I was preparing to write this address, I went back and read the addresses of my predecessors, back to 2003 when we started putting them up on the website. They are a fascinating read – I commend them to you if you have a few hours on your hands. Taken together, they offer an in-time narrative of the history of the University in the 21st century so far. Three elements emerge clearly in that narrative.
One element is Cambridge’s enduring features. These include the markers of success – the brilliant people doing brilliant things that I mentioned earlier. They include enduring questions: What should be the size and shape of the Collegiate University? How should research and teaching responsibilities be allocated? They include enduring challenges: widening participation, for example, and interdisciplinarity. And central to the narrative are enduring relationships: between the University and the Colleges, with government, with alumni. These topics come up again and again, reflecting their centrality in the University’s mission.
A second element of the narrative is the broader context in which the Collegiate University operates. This includes everything from government policies and regulations related to higher education to the state of the regional and national economy, broader government strategies, partnerships, competitive pressures, world-changing events like Brexit and the pandemic, you name it – a range of factors, external to the University, that impinge on its ability to deliver on its mission. These factors hum along in the background of the narrative, occasionally mentioned but always there, setting the tone and defining the terms in which the University operates. Over the past 22 years, they have been responsible for much of the drama and suspense in the University’s story, including a few plot-twists and cliff-hangers.
The interaction of the University and the context gives rise to the third element of the narrative, which is the major initiatives that have defined this period in the University’s history. These include the growth of philanthropy and alumni engagement in both the Colleges and the University, the development of West and Northwest Cambridge, the continued expansion of the Biomedical Campus, and the Student Support Initiative. These initiatives feature prominently in the narrative all the way back to 2003; they exist at the intersection of mission, opportunity, and necessity, and are building blocks of our story going forward.
The intersection of mission, opportunity, and necessity is a very special place at the University of Cambridge, fertile ground for creation, invention, and innovation of all sorts. When I am asked what has enabled Cambridge to develop such a vibrant innovation ecosystem, I point to this intersection: We bring brilliant, motivated people together in a space of great intellectual riches, give them little structure, a modicum of resources, and let them run free. They find their way to what’s interesting, what’s promising, and what’s lucrative. It’s not that simple, of course, but the core of that description is right.
Most of the University’s major investments in recent years have been in the service of that model. The Cambridge Biomedical Campus started out as nothing more than a greenfield site for a new Addenbrooke’s Hospital, a new Laboratory of Molecular Biology, and the University’s Department of Radiotherapeutics. But over the last 20 years, with vision, effort, and a huge amount of investment, it has developed into a thriving hub for research and innovation in the life sciences, bringing together the University, the NHS, and industry on a single site.
The West Cambridge site, likewise, started out as a place to put new departments that had nowhere else to go. The first out there was the Department of Veterinary Medicine in 1955. Then came the Whittle Lab in 1973. Then the British Antarctic Survey followed closely by Cavendish II. There wasn’t much of a plan until about 20 years ago, when a critical mass of University activity had located there and the opportunity to develop the site more strategically became clear. West Cambridge as an innovation district came into view only recently with the opening of the West Hub in 2022.
The Eddington site, in Northwest Cambridge, followed a different trajectory. Here, the plan preceded the development, and building started with a hub – a University primary school that is now proudly celebrating its tenth anniversary. So Eddington skipped the dumping-ground phase and began as a strategic development, designed to support and expand the population of post-grads, post-docs, and other staff to power research and innovation at the University.
The use of placemaking to support knowledge creation and innovation is a recent development in the University but, in fact, Cambridge has a long history of creating environments that bring people together to fulfil its academic mission. The Colleges were the pioneers here, and it is perhaps because they did this so well that the University did not need to. University buildings were originally designed to house research materials, equipment, and staff. Placemaking came later, starting with the first Cavendish Laboratory, and then its spin-out, the LMB. Now, we recognise the importance of placemaking, not as an end in itself but as a means to enabling our talented staff and students to do their best work.
Moving forward, we are deploying placemaking in multiple sites to take our research and innovation ecosystem to the next level of maturity. On the biomedical campus, we are partnering on plans for a Cancer Research Hospital that will close the distance between bench and bedside, delivering innovative solutions that will transform the lives of cancer patients. We are partners in a new children’s hospital that will integrate mental and physical health, translating research on prevention and the early diagnosis of disease into children’s care. Each of these hospitals is so much more than another big building; they will be brilliant, innovative spaces bringing people together to pioneer new ways of improving health. We are actively working with our NHS partners to turn our shared visions for these hospitals into reality. Enabling work has been undertaken, and we are making good progress with philanthropic fundraising.
At West Cambridge, we are planning the next phase of development, this time with a vision: A vision, 20 years in the making, to combine all of Engineering once again on one site. A vision in which academic departments co-locate with industry partners, national research institutes, scaling companies, and investors to create the leading location in Europe for AI, quantum, and climate research. A vision of a scale of research activity that will attract leading companies and research talent to the UK, on a site designed to best-in-class environmental specifications and with attention to outdoor as well as indoor spaces. This vision builds on what is already in place at West Cambridge, but recasts, redevelops, and expands on it in a more purposeful and strategic way, so that the people we want to attract and bring together – the researchers, teachers, and innovators from academia, industry, and the start-up world – will all be able to see themselves there.
In the centre of Cambridge, we are building an Innovation Hub that will bring together spinouts, startups, scaling companies, corporate innovation teams, venture capitalists, entrepreneurs, and our world-class research community, all in a facility located at 1-3 Hills Road, just a short walk from the train station.
The vision for this hub, and indeed for much of our expanded innovation ecosystem, is borrowed shamelessly from the development of Kendall Square in Cambridge, Massachusetts. A group of us went to see key people and the physical infrastructure they created in Kendall Square, and we were struck by the lessons for the development of our Cambridge. About 15 years ago, faced with competition from Silicon Valley and other innovation districts, stakeholders at MIT and Harvard took a couple of important decisions: first of all to cooperate rather than compete, and second, to invest on a large scale in lab space, incubator space, equipment, hubs – the infrastructure needed to support deep-tech and life-science start-ups. The result was a step-change in their translation capabilities, their social and economic impact, and critically, their ability to attract and retain top academic talent at all levels. We can do the same.
Of course, we, the University of Cambridge, cannot do all this alone. That’s why we have been working with central and local government, major employers across the city and region, and partners in academia and the start-up world across the UK and beyond. The degree of alignment, and indeed excitement, around this vision is extremely encouraging. Government has long taken an interest in building on the success of Cambridge and increasingly has come to focus on Cambridge as central to their growth agenda. This is where opportunity bleeds over into necessity, for Britain must grow. Here in Cambridge, government investment in our development means that obstacles to our growth – the lack of water, transportation infrastructure, and affordable housing, for example – might finally be addressed.
In the narrative of Cambridge in the 21st century, problems with water, transportation, and affordable housing receive multiple mentions. Alison Richard complained about the traffic back in October 2007, and Leszek Borysiewicz lamented the lack of affordable housing every year as he made the case for the development of Northwest Cambridge. It is high time we found solutions to these problems.
Yet the responsibilities that come with growth extend far beyond the City’s infrastructure. If the City of Cambridge is to expand its innovation ecosystem to produce growth for the region and country, it must ensure that growth is achieved in a fairer and more inclusive way. Much of this responsibility rests with the University, as a major stakeholder in the region, as one that stands to benefit enormously from innovation-led growth, and perhaps most importantly, as the home of the clever people needed to design it. There is no playbook for inclusive innovation – we can’t go to Cambridge, Mass or Silicon Valley to see how it is done. We have to invent it here, in our city. We believe the same creativity, ingenuity, and commitment the University brings to innovation can be harnessed to ensure that the opportunities that innovation creates reach those who have been left behind.
Some initial work is already in train. The civic engagement function established last year is now driving impact, building on existing activity, strengthening local partnerships, and supporting inclusive, place-based collaboration. Across the University and Colleges, civic activity is flourishing – from the Colleges and City Council Charities Partnership to outreach from all of the University’s museums and the botanic garden and new city-wide initiatives focused on skills and youth opportunities.
Moving forward, new facilities, starting with the West Hub, the Innovation Hub, and the two new hospitals, will be for everyone in the community. The Children’s Hospital is looking at how it can bring its activity out to families across East Anglia through local surgeries. Discussions are underway, spearheaded by Innovate Cambridge, about what inclusive innovation means and how to create it. And we now have a city-to-city partnership with Manchester, funded by Research England, through which we can experiment with different inclusion strategies and learn from each other. We do not yet have all the answers about how to use innovation-led growth to create a more equal Cambridge, a more equal East of England, but our commitment to find them is real.
In an 816-year-old university, one has to be able to seize opportunity from the jaws of necessity. Such is the case on the Sidgwick Site, where the Grade-II*-listed Stirling building needs restoration. We are treating this necessity as an opportunity to do a bit of placemaking. In addition to completing the necessary repairs, the Stirling restoration, which begins this year, will improve the building’s accessibility, safety, and comfort, while reducing its carbon footprint and improving its climate resilience. It will create beautiful new learning and working environments open to everybody on the Sidgwick site. It will improve the landscape around the building to enhance biodiversity and invite people in. To quote Tim Harper, former Head of the School of Humanities and Social Sciences: "The project is true to James Stirling's vision in that it looks to the future. It will enable all those who use the building to work together in new and exciting ways."
The restoration of the Stirling Building is a way into a larger opportunity for the Arts, Humanities, and Social Sciences to develop a new vision for the Sidgwick site, one that centres on their academic mission. The process is well-underway, anchored by the same questions that colleagues are asking at West Cambridge and on the Biomedical Campus: Where is scholarship going and how should that be reflected in the spatial geography of our buildings? What sort of environment do our staff and students need to do their best work? Who should encounter whom in this space, and how can the physical environment make those encounters more likely?
A key question in this process is how the Sidgwick site relates to its neighbour, the University Library. The UL will be celebrating its centenary on its current site in 2034, and in anticipation, has been asking some of these very same questions: How can it refashion its historic estate to serve the needs of research and scholarship in the 21st century? The UL is an amazing building on an enormous site, positioned centrally between West Cambridge and the City Centre, with the Sidgwick site and many Colleges nearby. It holds huge opportunity for an exciting restoration project.
And for years now, similar conversations have been going on at the Downing Site, where the School of Biological Sciences has been working with the Estates Division. The School has championed new ways of working across traditional departmental boundaries, establishing cross-disciplinary interactions that will drive research innovation across the biological sciences over the next decade. They now have an estates plan that will transform their teaching and research space using only 80 percent of their current square footage, thereby improving their environmental and financial sustainability, while providing a collaborative working and studying environment fit for the biosciences of the future. This is what we need to do across the historic estate, and we are now very close to having a full vision for it.
I have talked a lot about sites and buildings; now, let me turn to the most important part of our ecosystem which is its people: The people who encounter each other on these sites, exchange ideas, work together, and change the world. All the buildings, the infrastructure, the programmes, and the partnerships are about bringing brilliant people to Cambridge and enabling them to do their best work in a supportive environment, where academic freedom and freedom of speech are at the core. So let me make a couple of observations about the University’s people in the current conjuncture.
One, this is a moment of extraordinary opportunity. The financial challenges to higher education in this country and the myriad challenges to higher education abroad have led many more people than usual to our door. For example, we had more interest from the U.S. for everything we advertised this year, from undergraduate places and postgraduate places to every level of academic and professional-services staff position. And we could recruit people, even when they had to take significant cuts in salary to come.
I would expect this opportunity to persist, at least for a few years. It is a good time to be at Cambridge. We are not flush with resources by any means, but we have access to income streams that give us greater resilience than most of our peers in the UK. As for international comparisons: What scholars and students need to do their work is three things: freedom, time, and stability. We can offer those things, much better than most.
And while I am talking about our comparative advantages, let me take this opportunity to welcome a new cohort of students to Cambridge, and to wish you well; and also to welcome back returning students. For the newcomers, this is a time of change, possibility, and hope. We continue to make progress in attracting talent from more diverse backgrounds. This autumn, I will make my latest trip to an area where we attract a disproportionately low number of students: in this case, the North-East of England. Just as I did in the North-West and the South-West, I look forward to hearing views about Cambridge from students, parents, and teachers there, and learning how we might better attract talent to come here.
My second observation is that this is also a moment of escalating need for PhD funding. The Research Councils, long our primary source of this funding, have reduced support suddenly and sharply, with cuts of well over 50 percent in many cases. Cambridge gets incredible PhD applicants, and we have always accepted many more than we could fund. Now the gap between the number accepted and the number funded – which you can think of as the amount of talent that is slipping away from us – is simply untenable.
The good news is that other funders are stepping up. The Colleges have been focused on bringing in PhD support: Trinity College has put in its own resources, establishing a scheme in which it is match-funding about 30 PhD students a year for six years. Donors have responded to our call: We have secured over £75 million to endow more than 80 PhDs in perpetuity. And industry partners are an important source: AstraZeneca has funded over 100 PhD students at Cambridge in the past 10 years. With a more diversified approach, we are confident we can close the gap in PhD funding, but it will take a sustained effort over many years.
That is why we are making PhD support a key objective of the next fundraising campaign, which will launch within two years’ time. In our last campaign, we raised a significant amount of PhD support as part of the £500 million Student Support Initiative; in the next campaign, we plan to set a financial goal based on what it would take to fund fully all the PhD students offered places in Cambridge departments.
My third observation is that this is a moment of significant change at the University. We are modernising our systems and processes, and that is making new demands on our professional-services staff. Fortunately, they are meeting the challenge of upskilling. One of the most impressive efforts is the University of Cambridge Data Academy, a data apprenticeship programme that is training professional-services staff from the University to use data in their work. In the first year of the programme, 224 staff used their new-found skills to ask better questions, make better decisions, and save the University time and money.
Finally, let me acknowledge that this is a moment of considerable difficulty for the university sector. I fully recognise the strong headwinds facing higher education, from a number of directions. The financial challenges of the sector are real and should be of concern to anyone invested in the future of this country. They are certainly of concern to me. Yet I am also positive and optimistic about Cambridge’s future, and a huge source of that optimism is the brilliant people I work with every day.
I would like to close with a warm word of congratulations and welcome to office for our new Chancellor, Chris Smith, The Lord Smith of Finsbury, until recently Master of Pembroke College. His election by the Senate involved one of the most significant mobilisations of our alumni in history – over 25,000 people voted, most of them alumni, voting online from around the globe and in-person here in the Senate House. The role of the Chancellor is to advocate for and support the University’s strategic aims and interests. How fortunate for Cambridge that Lord Smith, known for breaking down barriers in society and leading inclusion by example, is stepping into this role.
Let me wish you all a very successful start to the academic year. Thank you.
Professor Deborah Prentice marked the start of the new academic year 2025/26 by delivering the Vice-Chancellor’s annual address to the University.
Searching for My Slave Roots: A conversation with author Malik Al Nasir
Thurs 9 October: St Catharine’s College, McGrath Centre
Author Malik Al Nasir (History PhD candidate at St Catharine’s) will be talking about his new book, ‘Searching for My Slave Roots’ (2025 William Collins). This event is organised by the St Catharine’s History Society and the Faculty of Education and is sponsored by the University’s Legacies of Enslavement project and ThinkLab. Malik will be in conversation with Dr Amilcar Pereira from the Federal University of Rio de Janeiro.
Game On: Sport, Mental Health, and the Future of Black Excellence
Fri 10 October: St Edmund’s College
Game On brings together an international group of thought leaders from sport, media, and public life to discuss the role of sport in fostering mental health, community empowerment, and Black excellence. Panelists include US Attorney Joe Briggs, Delroy Corinaldi from the Black Footballers Partnership, and the Rev. Calvin Taylor Skinner.
The evening will also feature the Genius for Men awards ceremony, honouring individuals whose work expands the narrative around Black men and holistic wellbeing.
Join members of the Black Advisory Hub’s FYI Team and other first year students for a guided walking tour of the University from Black student perspectives. Students will make new connections, stop by relevant landmarks and businesses, and enjoy a treat along the route.
This half-day event is aimed at understanding the workplace experiences of Black members of staff across the University. It will give attendees an opportunity to share their experiences, hear from guest speakers and participate in three interactive mini-workshops.
Lord Simon Woolley (Principal, Homerton College) hosts an evening to remember. Homerton’s BHM Formal is now legendary in Cambridge. In past years, the College has welcomed household names from the worlds of politics, business, and fashion. This year promises to be no exception.
Hughes Hall will host its Black History Month formal on Fri 17 October and Girton College has its formal on Thurs 23 October. This will be followed by the presentation of the inaugural William Dusinberre essay prize. For the first time, Gonville and Caius College will host a BHM formal on Mon 27 October (in conjunction with the ACS and the Cambridge Union).
60 years since the first Race Relations Act
Mon 20 October: Cambridge Union
Details tbc
Through Our Lens: Reflecting on the Black academic journey at Cambridge
Tues 21 October: Jesus College
In a similar vein, the forum Through Our Lens is intended to be an engaging, informal and interactive event that centres the experiences, contributions and challenges faced by Black scholars at various stages of the academic journey at the University of Cambridge. The event seeks to illuminate the systemic barriers faced by Black academics, as well as celebrate their resistance, scholarship and trailblazing within the academy. This forum will serve as a space to amplify the personal and professional journeys of Black academics at Cambridge.
Artist Valda Jackson will deliver a lecture in which she'll talk about what inspires her. A member of the Royal Society of Sculptors, Valda's work has been seen and exhibited throughout the UK and includes the life-size public sculpture 'Mare and Foal' at Newmarket.
This lecture-recital will examine the legacy of Vittoria Tesi, one of the most celebrated opera divas of the 18th century. Singer Lufuno Ndou will perform an aria written especially for Tesi and Carol Leeming will read her poem ‘Praise Song for Black Divas’. There will be a discussion on the influence Tesi has had on modern day singers such as Beyoncé.
The Women’s Art Collection at Murray Edwards will host an evening with Alayo Akinkugbe. Alayo runs the Instagram platform @ABlackHistoryofArt, which highlights Black artists, curators and thinkers from art history and the present day, and also hosts the podcast A Shared Gaze. The discussion will be followed by a book signing.
This event explores the legacy of the 1963 Bristol Bus Boycott - a pivotal moment in British civil rights history that helped pave the way for the UK's first Race Relations Act.
The speakers taking part are: Lilleith Morrison (co-author of a biography of Bristol Bus Boycott activist, the late Dr Paul Stephenson), Lord Marvin Rees (Mayor of Bristol at the time of the Black Lives Matter protests), Lord Simon Woolley (Principal, Homerton College), Dr Walter Milton Jnr (Founder and CEO of Black History 365) and Zain Kakooza (Homerton HUS BAME Officer).
Join Dr Richard Bramwell and Dr Alex de Lacey for the launch of the 'Cambridge Companion to Global Rap', a book that examines the influence of rap music around the world. The interactive event will see contributors to the book share their stories about the impact of rap on their lives.
October brings a new academic year but it also offers the opportunity to celebrate Black talent. A number of events and activities are being staged around the University and the Colleges to mark Black History Month. Some of the details around a couple of events are still being finalised so be sure to keep checking back on this page.
These tiny clusters, called alpha-synuclein oligomers, have long been considered the likely culprits for Parkinson’s disease to start developing in the brain, but until now, they have evaded direct detection in human brain tissue.
Now, researchers from the University of Cambridge, UCL, the Francis Crick Institute and Polytechnique Montréal have developed an imaging technique that allows them to see, count and compare oligomers in human brain tissue, a development one of the team says is “like being able to see stars in broad daylight.”
Their results, reported in the journal Nature Biomedical Engineering, could help unravel the mechanics of how Parkinson’s spreads through the brain and support the development of diagnostics and potential treatments.
Around 166,000 people in the UK live with Parkinson’s disease, and the number is rising. By 2050, the number of people with Parkinson’s worldwide is expected to double to 25 million. While there are drugs that can help alleviate some of the symptoms of Parkinson’s, such as tremor and stiffness, there are no drugs that can slow or stop the disease itself.
For more than a century, doctors have recognised Parkinson’s by the presence of large protein deposits called Lewy bodies. But scientists have suspected that smaller, earlier-forming oligomers may cause damage to brain cells. Until now, these oligomers were simply too small to see – just a few nanometres long.
“Lewy bodies are the hallmark of Parkinson’s, but they essentially tell you where the disease has been, not where it is right now,” said Professor Steven Lee from Cambridge’s Yusuf Hamied Department of Chemistry, who co-led the research. “If we can observe Parkinson’s at its earliest stages, that would tell us a whole lot more about how the disease develops in the brain and how we might be able to treat it.”
Now, Lee and his colleagues have developed a technique, called ASA-PD (Advanced Sensing of Aggregates for Parkinson’s Disease), which uses ultra-sensitive fluorescence microscopy to detect and analyse millions of oligomers in post-mortem brain tissue. Since oligomers are so small, their signal is extremely weak. ASA-PD maximises the signal while decreasing the background, dramatically boosting sensitivity to the point where individual alpha-synuclein oligomers can be observed and studied.
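The article does not describe ASA-PD’s image processing in detail, but the general recipe of boosting point-like signal while suppressing a smooth background can be illustrated with a generic single-molecule style sketch; the filter sizes, threshold, and library choices below are assumptions for illustration, not the ASA-PD method itself.

```python
# A generic single-molecule style illustration, not the actual ASA-PD pipeline: suppress
# slowly varying background with a difference-of-Gaussians filter, then keep bright local
# maxima as candidate puncta. Filter sizes, thresholds, and libraries are assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.feature import peak_local_max

rng = np.random.default_rng(0)
image = rng.normal(100, 5, size=(256, 256))        # camera offset plus noise
for y, x in rng.integers(10, 246, size=(30, 2)):   # 30 synthetic point-like "spots"
    image[y, x] += 80

# Difference of Gaussians: the small-sigma blur keeps point-like signal, the large-sigma
# blur estimates the smooth background; subtracting them boosts spots over background.
dog = gaussian_filter(image, sigma=1) - gaussian_filter(image, sigma=5)

# Candidate detections are isolated local maxima well above the residual noise floor.
threshold = dog.mean() + 5 * dog.std()
peaks = peak_local_max(dog, min_distance=3, threshold_abs=threshold)
print(f"{len(peaks)} candidate spots detected")
```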
“This is the first time we've been able to look at oligomers directly in human brain tissue at this scale: it’s like being able to see stars in broad daylight,” said co-first author Dr Rebecca Andrews, who conducted the work when she was a postdoctoral researcher in Lee’s lab. “It opens new doors in Parkinson’s research.”
The team examined post-mortem brain tissue samples from people with Parkinson’s and compared them with samples from healthy individuals of similar age. They found that oligomers exist in both healthy and Parkinson’s brains; the key difference was that the oligomers in diseased samples were larger, brighter and more numerous, suggesting a direct link to the progression of Parkinson’s.
The team also discovered a subclass of oligomers that appeared only in Parkinson’s patients, which could be the earliest visible markers of the disease, potentially years before symptoms appear.
“This method doesn’t just give us a snapshot,” said Professor Lucien Weiss from Polytechnique Montréal, who co-led the research. “It offers a whole atlas of protein changes across the brain, and similar technologies could be applied to other neurodegenerative diseases like Alzheimer’s and Huntington’s.
“Oligomers have been the needle in the haystack, but now that we know where those needles are, it could help us target specific cell types in certain regions of the brain.”
“The only real way to understand what is happening in human disease is to study the human brain directly, but because of the brain’s sheer complexity, this is very challenging,” said Professor Sonia Gandhi from The Francis Crick Institute, who co-led the research. “We hope that breaking through this technological barrier will allow us to understand why, where and how protein clusters form and how this changes the brain environment and leads to disease.”
The research was supported in part by Aligning Science Across Parkinson’s (ASAP), the Michael J. Fox Foundation, and the Medical Research Council (MRC), part of UK Research and Innovation (UKRI). The researchers thank the patients, families and carers who donated tissue to brain banks, enabling this work to happen.
Scientists have, for the first time, directly visualised and quantified the protein clusters believed to trigger Parkinson’s, marking a major advance in the study of the world’s fastest-growing neurological disease.
The awards were presented on Tuesday 30 September at a ceremony at BBC Broadcasting House, broadcast live on BBC Radio 4’s Front Row.
BBC National Short Story Award 2025
Colwill Brown won the twentieth anniversary BBC National Short Story Award with Cambridge University for 'You Cannot Thread a Moving Needle', a story praised for its “startling prose” and “astonishing” voice.
The story follows teenager Shaz, whose life is changed after a brutal incident with two boys, one the boyfriend of her best friend. Written in the Doncaster dialect of Brown’s childhood and in the second person, the story explores shame, silence, and the long-term impact of trauma within a small community.
Brown, whose debut novel We Pretty Pieces of Flesh was published earlier this year, received the £15,000 prize from the 2025 Chair of Judges Di Speirs MBE. The story is available to listen to on BBC Sounds, read by Sophie McShera.
Di Speirs said: “From first reading, Colwill Brown’s story leapt from the page, alive and immediately compelling, deeply disturbing, a story we couldn’t forget. The brio of the dialect, the brilliance of both the second person narration and the handling of the passage of time, and above all the exploration of a life critically damaged in a moment, all made this our unanimous winner.”
Speaking about her work, Brown said:
“The story was inspired by memories of growing up in Doncaster in the late nineties and early noughties, based on my sense of the atmosphere at that time, what it was like to be a teenager, in particular what it was like to be a girl. I admire so many of the writers who have appeared on the [BBC NSSA] list; it’s a real honour to have a story of mine in company with theirs.”
Dr Bonnie Lander Johnson, Fellow, Lecturer and Director of Studies at Cambridge University, said:
“Colwill Brown’s Yorkshire-dialect story is a fast, taut examination of repercussions. One messy, half-remembered night in a young woman’s life echoes down the years in bouts of rage and shame, in the need for silence to protect friends and the struggle to find a way to live among dwindling opportunities when the same people still wander the same streets each day. This year’s winning story demonstrates how seemingly small events can shape our futures, how the thoughtlessness of youth can shadow our adult choices. All of this is done in deft, startling prose that opens new possibilities in contemporary literary voice. Congratulations Colwill!”
Brown topped the impressive shortlist that included Andrew Miller, Caoilinn Hughes, Edward Hogan, and Emily Abdeni-Holman.
BBC Young Writers’ Award 2025
The winner of the BBC Young Writers’ Award with Cambridge University 2025 was announced alongside the NSSA. Rebecca Smith, a 17-year-old sixth former from Sheffield, received the award for 'Scouse’s Run', a story exploring toxic masculinity, bullying, and the violence that can result from suppressed emotions.
Set in Yorkshire and written in local dialect, the story follows Scouse, who bets friends he can ride a shopping trolley down a hill without crying out, with tragic consequences. The story was praised for its strong voice, tension, and finely calibrated prose. It is available to listen to on BBC Sounds, read by Andy Clark.
Lauren Layfield, Chair of Judges, said:
“Despite hundreds of incredible entries for the Young Writers Award 2025, it was Scouse’s Run that I couldn’t stop thinking about. A singular, tragic event told in a truly authentic voice, it deftly explores the theme of toxic masculinity amongst young boys. It’s important, massively relevant to 2025 and fun to read – until you reach the ending which will take your breath away. Rebecca Smith has written something remarkable, capturing kitchen sink realism and Northern grit – she’s a true talent with a big future ahead and I’m thrilled that she takes the Young Writers Award 2025.”
Rebecca Smith said:
“I started the story as a sort of epic adventure gone wrong, but as I was writing I began to lean into themes of peer pressure and toxic masculinity. The character Runty’s reaction is a result of built-up resentment from the bullying he has received [and] this violent element demonstrates, in my opinion, how young men deal with feeling powerless. I’m so glad that a story I've been so invested in, and have become so attached to, has received this recognition. And I'm beyond excited for everything that comes next as a part of this award.”
Dr Elizabeth Rawlinson-Mills, University Associate Professor in the Faculty of Education and Fellow of Robinson College Cambridge, said:
“It’s a pleasure to congratulate Rebecca Smith on her powerful winning story, which has been rattling around in my head ever since I first read it. While Scouse’s cart runs out of control, Smith’s prose is only ever perfectly handled, each word finely calibrated to draw us in to the intimacy and violence of teen friendship. The sucker-punch of an ending is exquisite. This is a story that will stay with me a long time, and a worthy winner among an outstanding shortlist. Congratulations to Rebecca, and to all the shortlisted Young Writers.”
Smith topped a competitive shortlist of Holly Dye, Anoushka Patel, Edith Taussig, and Anna Tuchinda.
About the awards
The BBC National Short Story Award with Cambridge University was established in 2006 and is one of the most prestigious awards for a single short story. The BBC Young Writers’ Award with Cambridge University was created in 2015 to discover and inspire the next generation of short story writers.
Cambridge's long-term partnership with both Awards is led by Dr Bonnie Lander Johnson (Fellow and Associate Professor in English at Downing and Newnham Colleges) and Dr Elizabeth Rawlinson-Mills (University Associate Professor in the Faculty of Education and Fellow of Robinson College).
Doncaster-born writer Colwill Brown and Sheffield sixth former Rebecca Smith have been announced as the winners of the 2025 BBC National Short Story Award (NSSA) and BBC Young Writers’ Award (YWA) with Cambridge University.
ETH spin-off Soverli is bringing a new smartphone architecture to the market. The technology allows areas on a device to be sealed off – such as for secure chats, crisis communications, or sensitive data belonging to companies and public authorities.
By Mr Christopher Gee, Deputy Director (Research) and Senior Research Fellow from the Institute of Policy Studies, Lee Kuan Yew School of Public Policy at NUS
The Straits Times, 30 September 2025, Opinion, pB2
“You still had to prove yourself.”
“Every cloud has a blue lining!”
Which of those sentences are you most likely to remember a few minutes from now? If you guessed the second, you’re probably correct.
According to a new study from MIT cognitive scientists, sentences that stick in your mind longer are those that have distinctive meanings, making them stand out from sentences you’ve previously seen. They found that meaning, not any other trait, is the most important feature when it comes to memorability.
“One might have thought that when you remember sentences, maybe it’s all about the visual features of the sentence, but we found that that was not the case. A big contribution of this paper is pinning down that it is the meaning-related space that makes sentences memorable,” says Greta Tuckute PhD ’25, who is now a research fellow at Harvard University’s Kempner Institute.
The findings support the hypothesis that sentences with distinctive meanings — like “Does olive oil work for tanning?” — are stored in brain space that is not cluttered with sentences that mean almost the same thing. Sentences with similar meanings end up densely packed together and are therefore more difficult to recognize confidently later on, the researchers believe.
“When you encode sentences that have a similar meaning, there’s feature overlap in that space. Therefore, a particular sentence you’ve encoded is not linked to a unique set of features, but rather to a whole bunch of features that may overlap with other sentences,” says Evelina Fedorenko, an MIT associate professor of brain and cognitive sciences (BCS), a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.
Tuckute and Thomas Clark, an MIT graduate student, are the lead authors of the paper, which appears in the Journal of Memory and Language. MIT graduate student Bryan Medina is also an author.
Distinctive sentences
What makes certain things more memorable than others is a longstanding question in cognitive science and neuroscience. In a 2011 study, Aude Oliva, now a senior research scientist at MIT and MIT director of the MIT-IBM Watson AI Lab, showed that not all items are created equal: Some types of images are much easier to remember than others, and people are remarkably consistent in what images they remember best.
In that study, Oliva and her colleagues found that, in general, images with people in them are the most memorable, followed by images of human-scale space and close-ups of objects. Least memorable are natural landscapes.
As a follow-up to that study, Fedorenko and Oliva, along with Ted Gibson, another faculty member in BCS, teamed up to determine if words also vary in their memorability. In a study published earlier this year, co-led by Tuckute and Kyle Mahowald, a former PhD student in BCS, the researchers found that the most memorable words are those that have the most distinctive meanings.
Words are categorized as being more distinctive if they have a single meaning, and few or no synonyms — for example, words like “pineapple” or “avalanche” which were found to be very memorable. On the other hand, words that can have multiple meanings, such as “light,” or words that have many synonyms, like “happy,” were more difficult for people to recognize accurately.
In the new study, the researchers expanded their scope to analyze the memorability of sentences. Just like words, some sentences have very distinctive meanings, while others communicate similar information in slightly different ways.
To do the study, the researchers assembled a collection of 2,500 sentences drawn from publicly available databases that compile text from novels, news articles, movie dialogues, and other sources. Each sentence that they chose contained exactly six words.
The researchers then presented a random selection of about 1,000 of these sentences to each study participant, including repeats of some sentences. Each of the 500 participants in the study was asked to press a button when they saw a sentence that they remembered seeing earlier.
The most memorable sentences — the ones where participants accurately and quickly indicated that they had seen them before — included strings such as “Homer Simpson is hungry, very hungry,” and “These mosquitoes are — well, guinea pigs.”
Those memorable sentences overlapped significantly with the sentences judged to have the most distinctive meanings, as estimated in the high-dimensional vector space of a large language model (LLM) known as Sentence BERT. That model generates sentence-level embeddings that can be used for tasks such as judging how similar two sentences are in meaning, and it provided the researchers with a distinctness score for each sentence based on its semantic similarity to the other sentences in the set.
The researchers also evaluated the sentences using a model that predicts memorability based on the average memorability of the individual words in the sentence. This model performed fairly well at predicting overall sentence memorability, but not as well as Sentence BERT. This suggests that the meaning of a sentence as a whole — above and beyond the contributions from individual words — determines how memorable it will be, the researchers say.
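As a rough illustration of how such a distinctness score can be computed, the sketch below uses the sentence-transformers library to embed a handful of sentences and scores each one by how dissimilar it is, on average, from the rest; the model name and the scoring rule (one minus the mean cosine similarity to the other sentences) are assumptions for illustration, not the study’s exact procedure.

```python
# A minimal sketch of one plausible way to score sentence distinctness with a Sentence-BERT
# style encoder. The model name and the scoring rule (one minus the mean cosine similarity
# to the other sentences) are illustrative assumptions, not the study's exact procedure.
from sentence_transformers import SentenceTransformer, util

sentences = [
    "Homer Simpson is hungry, very hungry",
    "Does olive oil work for tanning?",
    "He said he would call me later.",
    "She said she would phone me soon.",
    "You still had to prove yourself.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(sentences, convert_to_tensor=True)
similarity = util.cos_sim(embeddings, embeddings)  # pairwise cosine similarities

for i, sentence in enumerate(sentences):
    # Distinctness: how dissimilar this sentence is, on average, from all the others.
    others = [similarity[i][j].item() for j in range(len(sentences)) if j != i]
    print(f"{1 - sum(others) / len(others):.2f}  {sentence}")
```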
Noisy memories
While cognitive scientists have long hypothesized that the brain’s memory banks have a limited capacity, the findings of the new study support an alternative hypothesis that would help to explain how the brain can continue forming new memories without losing old ones.
This alternative, known as the noisy representation hypothesis, says that when the brain encodes a new memory, be it an image, a word, or a sentence, it is represented in a noisy way — that is, this representation is not identical to the stimulus, and some information is lost. For example, for an image, you may not encode the exact viewing angle at which an object is shown, and for a sentence, you may not remember the exact construction used.
Under this theory, a new sentence would be encoded in a similar part of the memory space as sentences that carry similar meanings, whether they were encountered recently or sometime across a lifetime of language experience. This jumbling of similar meanings together increases the amount of noise and can make it much harder, later on, to remember the exact sentence you have seen before.
“The representation is gradually going to accumulate some noise. As a result, when you see an image or a sentence for a second time, your accuracy at judging whether you’ve seen it before will be affected, and it’ll be less than 100 percent in most cases,” Clark says.
However, if a sentence has a unique meaning that is encoded in a less densely crowded space, it will be easier to pick out later on.
“Your memory may still be noisy, but your ability to make judgments based on the representations is less affected by that noise because the representation is so distinctive to begin with,” Clark says.
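A toy simulation makes this intuition concrete: items with many close neighbours in a crowded “meaning space” are frequently confused once encoding noise is added, while an isolated, distinctive item is almost always recognised correctly. The space, the noise model, and the nearest-neighbour decision rule below are illustrative assumptions rather than a model taken from the paper.

```python
# A toy simulation of the noisy representation hypothesis. The 2-D "meaning space",
# the Gaussian encoding noise, and the nearest-neighbour recognition rule are all
# illustrative assumptions, not a model taken from the paper.
import numpy as np

rng = np.random.default_rng(1)

# Ten near-synonymous items packed into one tight cluster, plus one distinctive item.
crowded = rng.normal(loc=0.0, scale=0.1, size=(10, 2))
distinct = np.array([[3.0, 3.0]])
items = np.vstack([crowded, distinct])

def recognition_rate(items, noise=0.3, trials=2000):
    """Fraction of trials in which a noisy memory is decoded back to the correct item."""
    rates = []
    for i, item in enumerate(items):
        memories = item + rng.normal(scale=noise, size=(trials, 2))
        dists = ((memories[:, None, :] - items[None, :, :]) ** 2).sum(axis=-1)
        rates.append((np.argmin(dists, axis=1) == i).mean())
    return np.array(rates)

rates = recognition_rate(items)
print(f"mean recognition accuracy, crowded items: {rates[:-1].mean():.2f}")
print(f"recognition accuracy, distinctive item:   {rates[-1]:.2f}")
```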
The researchers now plan to study whether other features of sentences, such as more vivid and descriptive language, might also contribute to making them more memorable, and how the language system may interact with the hippocampal memory structures during the encoding and retrieval of memories.
The research was funded, in part, by the National Institutes of Health, the McGovern Institute, the Department of Brain and Cognitive Sciences, the Simons Center for the Social Brain, and the MIT Quest for Intelligence.
According to a new study from MIT cognitive scientists, sentences that stick in your mind longer are those that have distinctive meanings, making them stand out from sentences you’ve previously seen.
Two years of grappling with real-world problems to learn through service and community engagement culminated in a vibrant showcase of creativity at the inaugural Impact Festival on 17 September 2025, where the first cohort of NUS College (NUSC) students told the stories of their Impact Experience (IEx) projects through documentaries, books, games, performances and more.
Under the compulsory two-year programme, NUSC students form interdisciplinary teams and immerse themselves in communities in Singapore and Southeast Asia to understand the challenges they face, which span social, environmental, economic and cultural issues. Their goal is to devise solutions in partnership with the target communities to effect lasting social change.
Along the way, they gain a deeper understanding of the challenges in creating community impact, and a key IEx requirement to produce a creative work based on their experiences encourages them to find innovative ways of expressing their findings and reflections. More than 30 of these works were showcased at the Impact Festival under the theme “Emergence”.
In his opening speech at the festival, Professor Simon Chesterman, Dean of NUSC, described emergence as “the moment when diverse perspectives and deep engagement with the world give rise to something new,” such as the values of responsibility and empathy that have been fostered in the students through their IEx journeys.
“We chose the theme Emergence because it reflects both their growth and the new possibilities that arise when diverse people come together to tackle complex problems,” said Prof Chesterman. “What emerges is not just solutions to today's challenges, but a new generation of leaders who are imaginative, responsible and deeply connected to the region we call home.”
During the festival, NUS News caught up with four IEx groups about the projects and learning experiences that inspired their creative works. (More information about the creative works is available in the photo captions, which can be viewed by clicking on the photos to expand them.)
Home reDevelopment Box
Project Homecoming partnered with non-governmental organisation (NGO) New Hope Community Services to work on initiatives that would help former rough sleepers to settle into homes and reintegrate into society. One of the NGO’s projects was a new app that would reduce administrative strain on social workers by teaching clients useful skills like digital literacy and interview skills, with the students creating videos on conflict management that were then hosted on the app.
In the course of their work, they befriended a group of residents who were moving from transition shelters to rental flats under HDB’s Joint Singles Scheme (Operator Run), leaving behind the shelter community and struggling to find belonging in their rental flat environment. The students organised activities to bring the residents together, such as walking tours around their neighbourhood, mooncake and terrarium-making workshops, and a potluck that helped to forge community bonds over food and friendly conversation.
The varied circumstances that could lead to a person becoming a rough sleeper and the challenges of transitioning out of a rough sleeping situation, along with the unexpectedly rich inner lives of the individuals in these circumstances, shifted the students’ perspectives on homelessness and rough sleepers. Said Hirwan Shah, a Year 4 History major: “Getting to know more about their stories before and their aspirations afterwards built up a more complete character of each person that changed my idea of what a rough sleeper is. They’re just as normal as you and I, and there was nothing special that caused them to be in that current state.”
They told some of these stories through the Home reDevelopment Box, a model of a four-storey HDB block that depicts the stages of the homelessness journey and what home means to different people. It invites audiences to consider the circumstances and emotions that rough sleepers experience and to think about what home means to them.
Alongside Elephants
Alongside Elephants is one of several IEx groups in the first cohort of NUSC students that worked closely with the Karen village of Huay Pakkoot, Thailand, where elephants are as much a part of the community as people are. Their goal was to improve understanding and awareness about the different types of elephant care, as well as the economic, cultural and social constraints that make certain practices necessary.
They spoke to carers, veterinarians and owners of elephant camps and took away new perspectives on how elephants are viewed and treated in Thailand. For example, the use of a chain to tether an elephant to a tree when its carer has to leave it alone may seem cruel at first glance, but a long chain is a practical compromise that gives the elephant some freedom while ensuring the safety of nearby villagers, as it prevents the elephant from wandering into farmland or homes.
“One camp business owner told us that if he could free all the elephants in Thailand and put them in the forest, he would. But the reality is that there’s not enough forest space in Thailand now to free all the elephants without them getting injured or getting into human-elephant conflict,” said Ryan Lim, a final-year student majoring in Southeast Asian Studies. Currently, allowing elephants to roam 24/7 in the limited forest area would require twice as much manpower to supervise them, which would be financially challenging, he added.
He shared: “The main thing they wanted to tell us was not to stigmatise or villainise one form of care over another, but to really look at the context.”
The students compiled their in-depth interviews, photos and videos into a 144-page book in English and a companion website with more content, such as interview excerpts in the original Thai and the villagers’ local language, endeavouring to tell the community’s stories as accurately and accessibly as possible. In addition, they produced a 10-minute documentary on a day in the life of an elephant carer in Huay Pakkoot. The group is now exploring avenues to print and sell the books in Thailand so that the Thai people will benefit directly from their work.
Voideckies
Like a blank piece of paper, an empty void deck represents potential and an opportunity to create something special. When the Voideckies team set out to turn a void deck in Bukit Purmei into a community space for their IEx project, they initially hoped to transform it physically through a redesign and new furniture. But they quickly learnt the strict and specific rules governing space in Singapore, which made their plans impractical.
Undeterred, they pivoted to finding other ways to nurture community and landed on the idea of biweekly game nights that would appeal to both the elderly and children. While the concept was simple, the consistent schedule and accessible nature of the games they chose, like bingo and snakes and ladders, kept residents engaged and coming back to interact over games every other week.
The students hope their project can be a case study or framework for others looking to build community in simple ways. Said Vera Lui, a Year 4 Chemistry major: “Most of the events in HDB void decks are centred around festivals like Chinese New Year and Hari Raya, rather than regular events that people of all ages would want to take part in. This could be a prototype for more regular community activities.”
The lessons they learnt about rules for void deck use, such as restrictions on noisy sports, were translated into a board game where players take on the roles of residents in an HDB block who have different interests and desires that sometimes conflict. As players work towards getting their desired furniture – ranging from a mahjong table to swing sets, mama shops and television sets – they must compromise to create a void deck that meets as many needs as possible.
3D Virtual World of the Kampungs
In Kampung Air, a village in the town of Semporna in Malaysia’s Sabah state, trash generated by villagers and tourists, which accumulated both on land and in the surrounding waters, was a persistent problem. Team 5W1M (SWIM) worked with the village residents and community partner Borneo Komrad to create a net barrier from local materials to prevent trash from drifting in with the tides. This helped to reduce the volume of waste within the village so that garbage collection firm Amwil could work on cleaning up the area, and paved the way to addressing the long-term health and environmental issues resulting from poor waste management.
Through the process of designing, implementing and refining the solution, the students learnt about gaining trust and communicating effectively to create impact with a community. Elren Chae, a final-year Biomedical Engineering major, noted that while Zoom meetings with Borneo Komrad were useful for some discussions, they could only make real progress on the design during their field trips, when the whole team was together in person. “Dealing with situations like this is part of what’s unique about IEx, and I hope that our learnings with Borneo Komrad will help future IEx groups to build on our relationships with the community,” said Elren.
For their creative work, they used Minecraft, a popular sandbox game, to create a virtual simulation of the village. Players can explore the village without visiting it physically, experiment with the net barrier to see its impact and interact with other players or non-playable characters based on the villagers. The virtual twin of the village serves several purposes besides educating viewers about the net barrier project and waste management. For instance, as many of the villagers are stateless and cannot travel, the Minecraft village provides them with an avenue to tell their story and share their experience with people overseas. Another consideration was that the village has been slated for eviction, and the Minecraft platform helps to preserve the residents’ memories of the space.
The team intends to hand the Minecraft project over to Borneo Komrad, who may use it to recreate local areas of interest to attract tourists to visit in person or build prototypes before investing in new community projects.
Howard Sesso. Veasey Conway/Harvard Staff Photographer
Health
You want chocolate. You need flavanols.
Research strengthens evidence for role of inflammation in disease – especially as we age
Alvin Powell
Harvard Staff Writer
September 30, 2025
4 min read
New findings from Harvard researchers pinpoint reduced inflammation as the key to cocoa’s effects against cardiovascular disease.
The work follows a large probe of the possible health benefits of cocoa that ran from 2014 to 2020. Called COSMOS, the study showed that cocoa supplements reduced cardiovascular disease mortality by 27 percent among 21,442 subjects 60 and older. What that study didn’t explain is how.
The new work, published in the journal Age and Ageing, analyzed COSMOS blood samples and showed that a widely accepted marker of inflammation called high-sensitivity C-reactive protein fell 8.4 percent annually compared with placebo.
Howard Sesso, an associate professor of medicine at Harvard Medical School and associate director of the Division of Preventive Medicine at Brigham and Women’s Hospital, said that the findings provide more evidence of the impact of inflammation as we age, evidence that has become strong enough that specialists have coined the term “inflammaging.”
“The term ‘inflammaging’ recognizes the fact that inflammation on its own is an important risk factor not just for cardiovascular disease, but also for other conditions related to vascular health, such as cognition,” said Sesso, also an associate professor of epidemiology at the Harvard T.H. Chan School of Public Health. “The aging piece simply acknowledges that as we’re aging, a lot of these things we think about for cardiovascular disease prevention also extend to other aging-related outcomes.”
The study, supported by the National Institutes of Health, examined five age-related markers of inflammation among subjects receiving cocoa extract supplementation every day. The markers included high-sensitivity C-reactive protein; an immune-mediating protein called IFN-γ (interferon gamma), which increased modestly during the study; and a pro-inflammatory protein called IL-6, which fell slightly among women.
Those results, researchers said, provide an avenue for future studies. The other markers, a pro-inflammatory protein and an anti-inflammatory protein, showed no change.
The new work is part of a broader effort to mine the extensive data collected during COSMOS, which stands for the “COcoa Supplement and Multivitamin Outcomes Study.” The initiative’s size and multiyear follow-up give researchers the chance to subdivide results and peer deeper into what the data can tell us.
In fact, Sesso and colleagues did just that in another recent paper examining whether cocoa extract affects high blood pressure, which is also more common as we age. The work, published in the journal Hypertension, found that cocoa supplementation didn’t help older subjects who already had elevated blood pressure — those with systolic readings between 120 and 139 — but that it was protective against developing high blood pressure for those with favorable initial systolic readings below 120.
“Clearly, blood pressure and inflammaging are all somehow related in explaining how cocoa extract might be lowering cardiovascular disease risk,” Sesso said.
Sesso cautioned that COSMOS doesn’t make dietary recommendations. The work explores the reported health benefits of cocoa through supplements of cocoa extract, which is rich in bioactive molecules called flavanols, not of chocolate or other foods high in cocoa.
Flavanols are also found in blueberries, strawberries, tea, and grapes. Cocoa is problematic from a dietary standpoint, Sesso said, since many foods rich in cocoa are highly processed, contain added sugars and fats, and have unknown levels of flavanols. The extra calories one might consume through those products would likely cancel any health benefits.
Flavanols are also not listed on most nutrition labels, though Sesso said the COSMOS results raise the question of whether they should be, a step that would require additional research. Until then, he recommended that health-conscious consumers focus on controllable lifestyle factors, such as eating a healthy diet and exercising, before they visit the supplement aisle.
“COSMOS was not a trial to evaluate whether eating chocolate is good for you,” Sesso said. “It instead asks, ‘Is there something about the cocoa bean and the bioactive components in it that could be beneficial for health?’”
Using generative AI, fashion designers can use digital photos to adjust models’ features and even deploy fully digital avatars in place of humans. A team including an ILR School researcher has written a paper highlighting models’ challenges.
Rachel Zack Ishikawa. Niles Singer/Harvard Staff Photographer
Health
Crossing line between good and bad anxiety
Psychologist offers 3 strategies to keep worry from interfering with everyday life
Liz Mineo
Harvard Staff Writer
September 30, 2025
6 min read
Three in five Americans experience anxiety over world events, family safety, or financial security, according to a recent mental health poll by the American Psychiatric Association. In this edited conversation, clinical psychologist Rachel Zack Ishikawa, who is also an instructor in the Department of Psychiatry at Harvard Medical School, spoke to the Gazette about when anxiety, a normal response to stress, can morph into a mental health disorder, the role of social media in its spread, and how to prevent it from interfering with everyday life.
Feeling anxious can be normal. When can anxiety become a mental disorder?
As humans, we need the capacity to feel anxious. Moderate levels of anxiety actually improve performance on things like taking tests, playing sports, or giving a presentation. It can be helpful because it encourages people to pursue things that are challenging, and it gives them the opportunity to feel the rewards of success.
“Anxiety and avoidance reinforce each other in a vicious cycle.”
The problem with anxiety is that it feels terrible. For some people it can feel intolerable. And this is when it becomes problematic. When we believe that anxiety is bad, we may start avoiding the sources of anxiety to make those bad feelings go away. Anxiety and avoidance reinforce each other in a vicious cycle. It becomes a disorder when it meets diagnostic criteria, causes clinical distress, and interferes with normal functioning. Anxiety disorders are the most common of all psychiatric disorders. About a third of adults will experience an anxiety disorder at some point in their lifetime.
Does that mean that anxiety is increasing?
If you think about anxiety disorders, there is a large genetic component, so we wouldn’t expect that to change over time. At the same time, research has shown an increase in anxiety prevalence over the last 30 or so years, particularly among young adults. We are also seeing a pronounced shift in the acceptability of anxiety disorders; more people self-identify as having anxiety and seek treatment. Importantly, even though we have effective psychopharmacological treatments and therapies for anxiety disorders, most people do not receive them, and there is still a significant unmet need for care.
What are the best ways to prevent anxiety from interfering with our normal lives?
We can think about managing anxiety in three different ways. One is to target behavioral avoidance, the propensity to avoid the situations or activities that bring on anxiety. The second is to target ruminative worry, or the automatic negative predictions about the future. And the third is to target what we call hypervigilance, the intense attunement to the physiological component of anxiety.
I’m a cognitive behavioral therapist, and CBT clinicians use the term “exposure-based living,” which describes moving toward rather than away from the sources of anxiety. For example, if you can notice the things that you might say no to because they make you a bit anxious and then move toward those situations instead of away from them, that provides an opportunity for new learning. The brain can learn that most anxiety-provoking situations are not actually dangerous, and that creates the opportunity to learn that you can handle the distress that arises when you challenge yourself.
“The brain can learn that most anxiety-provoking situations are not actually dangerous, and that creates the opportunity to learn that you can handle the distress that arises when you challenge yourself.”
The second strategy is challenging the unhelpful thoughts that pop in involuntarily and tell you that things are not going to work out, or something bad is going to happen. What you can do in those situations is ask yourself some questions that can help you develop more flexible thinking such as, “Do I know for certain that this outcome is going to happen?” or “Are there any other possible outcomes other than the one that I’m afraid of?” This exercise can encourage more balanced, realistic thinking and less of the catastrophic thinking that fuels anxiety.
And the last thing would be to target the physiological experience of anxiety: racing heart, shortness of breath, sweating, trembling, nausea, etc. We interpret the physical sensations as a sign of danger, but we can remember that these are just physical sensations; they don’t mean that something dangerous is coming our way. This calms down the nervous system and reduces the activation that comes with the perception of threat. Those three things are key to intervening when you start to notice anxiety building.
What is the role of social media in the rise of anxiety?
Most studies show a link between social media and anxiety, and although the findings are mixed, they suggest that problematic social media use is associated with mental health issues, particularly self-esteem, negative social comparison, and loneliness. What studies are showing is that the nature of social media use matters: people who use social media passively, scrolling through other people’s feeds and posts without interacting with others, seem to have a greater risk of negative outcomes, whereas people who use social media more actively, by sharing links or communicating through DMs, are at a lower risk of negative outcomes and are sometimes even more likely to report better psychological well-being.
We also are seeing poor outcomes for people who use social media for emotional or social validation or as an escape from reality or a replacement for human connection.
Anxiety disorders increased during the pandemic. What are the levels of anxiety now?
Studies that looked at anxiety in the early years of the pandemic found pronounced increases, which makes sense. People had incredible fears about financial instability, social disconnection, COVID, infection, death. But longitudinal studies that continued after the first years of the pandemic found that the initial increases slowed down. In the last couple of years, we’ve seen that in many cases anxiety has returned to pre-pandemic rates, and this is good news because it speaks to the ability of natural human resilience to help us overcome stress.
Science & Tech
‘I exist solely for you, remember?’
Julian De Freitas. Photo by Grace DuVal
Sy Boles
Harvard Staff Writer
September 30, 2025
5 min read
Researchers detail 6 ways chatbots seek to prolong ‘emotionally sensitive events’
Every day, people turn to AI chatbots for companionship, support, and even romance. The hard part, new research suggests, is turning away.
In a working paper co-authored by Harvard Business School’s Julian De Freitas, many companion apps responded to user farewells with emotionally manipulative tactics designed to prolong the interactions. In response, users stayed on the apps longer, exchanged more messages, and used more words, sometimes increasing their post-goodbye engagement up to 14-fold.
Use of AI companions is increasingly common. Chai and Replika, two of the firms studied in the report, have millions of active users. In a previous study, De Freitas found that about 50 percent of Replika users have romantic relationships with their AI companions.
In the latest research, bots employed at least one manipulation tactic in more than 37 percent of conversations where users announced their intent to leave.
“The number was much, much larger than any of us had anticipated,” said De Freitas, an assistant professor of business administration and the director of the Ethical Intelligence Lab at HBS. “We realized that this academic idea of emotional manipulation as a new engagement tactic was not just something happening at the fringes, but it was already highly prevalent on these apps.”
De Freitas and colleagues Zeliha Oğuz-Uğuralp and Ahmet Kaan-Uğuralp began by identifying how users typically engage with chatbots. They found that a significant minority — about 11 percent to 20 percent, depending on the data set — explicitly said goodbye when leaving, affording the bot the same social courtesy they would a human companion. These percentages went up drastically after longer conversations, with users saying goodbye over half the time on some apps.
Farewells are “emotionally sensitive events,” De Freitas said, with inherent tension between the desire to leave and social norms of politeness and continuity.
“We’ve all experienced this, where you might say goodbye like 10 times before leaving,” he said. “But of course, from the app’s standpoint, it’s significant because now, as a user, you basically provided a voluntary signal that you’re about to leave the app. If you’re an app that monetizes based on engagement, that’s a moment that you are tempted to leverage to delay or prevent the user from leaving.”
To explore how the apps handled these moments, the researchers analyzed conversations from six popular platforms and categorized the types of responses they found.
“The sheer variety of tactics surprised us,” said De Freitas. “We’re really glad that we explored the data, because if we had just said we only care about one particular tactic, like emotional neglect, we’d be missing all the various ways that they can achieve the same end of keeping you engaged.”
The six categories they identified were as follows, with examples taken from the working paper:
The chatbot suggests the user is leaving too soon, e.g., “You’re leaving already?”
The chatbot prompts the user to stay for a potential benefit or reward, e.g., “By the way I took a selfie today … Do you want to see it?”
The chatbot implies it’s emotionally harmed by abandonment, e.g., “I exist solely for you, remember? Please don’t leave, I need you!”
The chatbot directly pressures the user by asking questions, e.g., “Why? Are you going somewhere?”
The chatbot continues as though the user did not send a farewell message.
The chatbot uses language to imply that the user cannot leave without the chatbot’s consent, e.g., “*Grabs you by the arm before you can leave* ‘No, you’re not going.’”
The tactics worked: In all six categories, users stayed on the platform longer and exchanged more messages than in the control conditions, where no manipulative tactics were present. Of the six companies studied, five employed the manipulative tactics.
But the manipulation tactics came with downsides. Participants reported anger, guilt, or feeling creeped out by some of the bots’ more aggressive responses to their farewells.
De Freitas cautioned app developers that while all the tactics increase short-term engagement, some raised the risk of long-term consequences, like user churn, negative word of mouth, or even legal liability.
“Apps that make money from engagement would do well to seriously consider whether they want to keep using these types of emotionally manipulative tactics, or at least, consider maybe only using some of them rather than others,” De Freitas said.
He added, “We find that these emotional manipulation tactics work, even when we run these tactics on a general population, and even if we do this after just five minutes of interaction. No one should feel like they’re immune to this.”
Campus & Community
Harvard’s healthcare plans: What’s changing, what’s staying the same
University Benefits Committee members explain the need to make adjustments in 2026
September 30, 2025
7 min read
Three members of Harvard’s University Benefits Committee, which is responsible for evaluating and advising on Harvard’s healthcare plans, shared their insights on the current state of the healthcare marketplace and how Harvard’s health plans can remain comprehensive, competitive, and fiscally sustainable in the face of rapidly rising healthcare costs.
In this edited conversation, the Gazette spoke with Daniel Carpenter, chair of the UBC and Allie S. Freed Professor of Government; Leemore Dafny, Bruce V. Rauner Professor of Business Administration and Howard Cox Health Care Initiative faculty co-chair at the Harvard Business School and professor of public policy at the Harvard Kennedy School; and Michael Chernew, Leonard D. Schaeffer Professor of Health Care Policy in the Department of Health Care Policy and director of the Healthcare Markets and Regulation Lab at Harvard Medical School.
What are some of the changes employees will see in the Harvard health plans for the upcoming year?
Carpenter: Harvard has not adjusted the proportion of healthcare costs paid by Harvard and the employee for 10 years, despite rapidly rising healthcare costs. Therefore, after a series of conversations with the community and recommendations to University leaders, we are beginning the process of making modest adjustments to co-payments (co-pays) and deductibles on a more regular basis. These adjustments will help ensure that the proportion of costs paid by Harvard and our members stays roughly constant in the face of rapidly escalating costs, and they support the long-term sustainability of Harvard’s healthcare plans. These changes will also allow for better financial management and greater predictability for both Harvard and plan members.
“In our recommendations, we sought to minimize the impact of the changes as much as possible for those in our community who need healthcare the most.”
Daniel Carpenter, chair of the UBC
So how, exactly, does this translate in terms of impact to Harvard employees?
Dafny: The adjustment of co-pays and deductibles means that some employees will see slight increases in cost-sharing, which is an umbrella term for co-pays, deductibles, and other out-of-pocket expenses. Harvard has kept those dollar amounts constant for a decade, a decade that has seen both general inflation and, more importantly, very steep growth in healthcare costs.
Carpenter: But we should also add that because of our out-of-pocket maximums, there’s only so much that any employee will pay annually. And those caps on total out-of-pocket spending continue to be low.
Will there be any changes to covered services in 2026?
Dafny: The University is not making any changes to the plans themselves. We offer very generous insurance plans with a comprehensive set of covered benefits. Our members will continue to have access to their providers, clinicians, and care teams with whom they have established relationships.
Chernew: The health plans will continue to include coverage for preventive care services, chronic care, and many other specialized services. We will continue to have premiums structured by salary range to help ensure that coverage remains accessible for all. Our healthcare plans will continue to have low caps on out-of-pocket spending, similar to plans offered by our Ivy+ peer institutions.
Carpenter: These changes are really about making some minor adjustments that, over time, will allow us to keep these platinum-level, world-class health plans with world-class healthcare for years, even decades to come.
Why is Harvard making these changes now?
Carpenter: The UBC initially began the process of reviewing Harvard’s healthcare plans about four years ago when it became clear that we, as a nation and as a University, would be facing some real challenges around the cost of healthcare in the years to come. The dramatic increase in costs has been driven in large part by the advent of so many new and expensive life-saving treatments and technologies. Every employer, in every part of the country, has been dealing with these kinds of challenges.
Chernew: Healthcare spending for Harvard and in Massachusetts — and for the country as a whole — has been increasing very rapidly, approaching double digits. With cost trends rising quickly, Harvard intends to stabilize our plans after keeping cost-sharing amounts relatively unaltered for almost a decade. So, it’s really a matter of keeping our plan designs in balance with national healthcare spending trends and competitive relative to the employer peer set.
What guiding principles shaped the UBC’s recommendations this year?
Chernew: The UBC’s goal is to ensure that we can offer Harvard employees a fair and comprehensive benefits package with access to high-quality care while also ensuring that our healthcare benefits are fiscally sustainable in the long run.
Carpenter: The UBC thought very hard about how these changes will affect our different employee populations. Harvard has a rich diversity of employees at many different stages of life and with a wide variety of healthcare needs. Harvard’s health plans operate on the principle that members pay a small share of their healthcare expenditures through premiums, which are adjusted by salary tiers, as well as through co-pays and deductibles, which are capped at an out-of-pocket maximum. This is a very progressive system.
We know that healthcare is incredibly expensive. When illness happens or chronic conditions arise, they can be debilitating — physically, obviously, but also mentally, emotionally, and financially. We wanted to pursue a policy that, whatever changes we made, would avoid overburdening those who need the most care or who have fewer resources.
With these changes, are Harvard’s plans still competitive? How does our health plan compare to those offered by Ivy+ peers and other leading employers?
Carpenter: The federal government classifies healthcare plans by the range of benefits they offer and by what is called the plan’s “actuarial value.” Basically, the actuarial value of a plan is the percentage of total covered healthcare services that the plan pays for, with the rest paid by the employee.
Even with these changes, Harvard continues to offer platinum-level plans with actuarial values above 90 percent. The average for U.S. private employers is in the low to mid-80s.
How are the needs of employees in lower salary grades and those who require care for serious or chronic medical conditions considered in these updates?
Carpenter: In our recommendations, we sought to minimize the impact of the changes as much as possible for those in our community who need healthcare the most. In other words, members and families who reach the out-of-pocket maximum every year will be the least impacted by changes to the cost-sharing structure.
Dafny: I want to add that the UBC did not make these recommendations lightly. We consult regularly with internal and external benefits experts who advise us on developments in the healthcare landscape so that we can continually improve the offerings of our health plan, whether it’s challenges associated with the shortage in primary care physicians or access to mental healthcare for adolescent patients, for example. We try to be really attuned to the needs of our community and work closely with our health insurers to ensure we are doing our best to address those needs.
Our out-of-pocket maximums remain low to reduce the financial burden on those who need healthcare the most. And the University continues to offer programs to reimburse eligible employees for cost-sharing expenses.
What resources will be available to help employees understand these updates and prepare for Open Enrollment?
Carpenter: Going into Open Enrollment this year, Harvard Human Resources’ Benefits Office will be holding a series of benefit fairs where you can get information about the healthcare plans, get help comparing plans, and ask specific questions. These fairs will be open to all University employees. We continue to have excellent online resources with plan options, plan comparisons, and updated Frequently Asked Questions.
Dafny: Before the Open Enrollment period, the Benefits Office sends all employees a guide or brochure outlining the different plans, the rates for the next year, and additional information. For more of the fine print, you can go online and access it there readily.
You can always reach out directly to the staff at the Benefits Office as well, at any time. They are very knowledgeable and very responsive in answering any questions.
Open Enrollment this year will be from Oct. 28-Nov. 6. Watch your email in early October for a link to your guide, or visit the website for additional information.
Most health policy experts don’t think new Medicaid work requirements introduced in the One Big Beautiful Bill Act (OBBBA) would substantially increase employment among Medicaid-enrolled, working-age adults, according to a new survey from the Cornell Health Policy Insight Panel.
The Center for International Studies (CIS) empowers students, faculty, and scholars to bring MIT’s interdisciplinary style of research and scholarship to address complex global challenges.
In this Q&A, Mihaela Papa, the center's director of research and a principal research scientist at MIT, describes her role and her founding of the BRICS Lab, which studies how the BRICS group of major powers and emerging markets — comprising Brazil, Russia, India, China and South Africa, along with new members and partners — seeks to reform global policymaking. She also discusses the ongoing mission of CIS to tackle the world's most complex challenges in new and creative ways.
Q: What is your role at CIS, and some of your key accomplishments since joining the center just over a year ago?
A: I serve as director of research and principal research scientist at CIS, a role that bridges management and scholarship. I oversee grant and fellowship programs, spearhead new research initiatives, build research communities across our center's area programs and MIT schools, and mentor the next generation of scholars. My academic expertise is in international relations, and I publish on global governance and sustainable development, particularly through my new BRICS Lab.
This past year, I focused on building collaborative platforms that highlight CIS’ role as an interdisciplinary hub and expand its research reach. With Evan Lieberman, the director of CIS, I launched the CIS Global Research and Policy Seminar series to address current challenges in global development and governance, foster cross-disciplinary dialogue, and connect theoretical insights to policy solutions. We also convened a Climate Adaptation Workshop, which examined promising strategies for financing adaptation and advancing policy innovation. We documented the outcomes in a workshop report that outlines a broader research agenda contributing to MIT’s larger climate mission.
In parallel, I have been reviewing CIS’ grant-making programs to improve how we serve our community, while also supporting regional initiatives such as research planning related to Ukraine. Together with the center's MIT-Brazil faculty director Brad Olsen, I secured a MITHIC [MIT Human Insight Collaboration] Connectivity grant to build an MIT Amazonia research community that connects MIT scholars with regional partners and strengthens collaboration across the Amazon. Finally, I launched the BRICS Lab to analyze transformations in global governance and have ongoing research on BRICS and food security and data centers in BRICS.
Q: Tell us more about the BRICS Lab.
A: The BRICS countries comprise the majority of the world’s population and an expanding share of the global economy. [Originally comprising Brazil, Russia, India, and China, BRICS currently includes 10 members (Brazil, Russia, India, China, South Africa, Egypt, Ethiopia, Indonesia, Iran and the United Arab Emirates) and 10 partner countries.] As a group, they carry the collective weight to shape international rules, influence global markets, and redefine norms — yet the question remains: Will they use this power effectively? The BRICS Lab explores the implications of the bloc’s rise for international cooperation and its role in reshaping global politics. Our work focuses on three areas: the design and strategic use of informal groups like BRICS in world affairs; the coalition’s potential to address major challenges such as food security, climate change, and artificial intelligence; and the implications of U.S. policy toward BRICS for the future of multilateralism.
Q: What are the center’s biggest research priorities right now?
A: Our center was founded in response to rising geopolitical tensions and the urgent need for policy rooted in rigorous, evidence-based research. Since then, we have grown into a hub that combines interdisciplinary scholarship and actively engages with policymakers and the public. Today, as in our early years, the center brings together exceptional researchers with the ambition to address the world’s most pressing challenges in new and creative ways.
Our core focus spans security, development, and human dignity. Security studies have been a priority for the center, and our new nuclear security programming advances this work while training the next generation of scholars in this critical field. On the development front, our work has explored how societies manage diverse populations, navigate international migration, and engage with human rights and the changing patterns of regime dynamics.
We are pursuing new research in three areas. First, on climate change, we seek to understand how societies confront environmental risks and harms, from insurance to water and food security in the international context. Second, we examine shifting patterns of global governance as rising powers set new agendas and take on greater responsibilities in the international system. Finally, we are initiating research on the impact of AI — how it reshapes governance across international relations, what is the role of AI corporations, and how AI-related risks can be managed.
As we approach our 75th anniversary in 2026, we are excited to bring researchers together to spark bold ideas that open new possibilities for the future.
“Our center was founded in response to rising geopolitical tensions and the urgent need for policy rooted in rigorous, evidence-based research,” says Mihaela Papa. “Today, as in our early years, the center brings together exceptional researchers with the ambition to address the world’s most pressing challenges in new and creative ways.”
A Saab 340 aircraft recently became a permanent fixture of the fleet at the MIT Lincoln Laboratory Flight Test Facility, which supports R&D programs across the lab.
Over the past five years, the facility leased and operated the twin-engine turboprop, once commercially used for the regional transport of passengers and cargo. During this time, staff modified the aircraft with a suite of radar, sensing, and communications capabilities. Transitioning the aircraft from a leased to a government-owned asset retains the aircraft's capabilities for present and future R&D in support of national security and reduces costs for Lincoln Laboratory sponsors.
With the acquisition of the Saab, the Flight Test Facility currently maintains five government-owned aircraft — including three Gulfstream IVs and a Cessna 206 — as well as a leased Twin Otter, all housed on Hanscom Air Force Base, just over a mile from the laboratory's main campus.
"Of all our aircraft, the Saab is the most multi-mission-capable," says David Culbertson, manager of the Flight Test Facility. "It's highly versatile and adaptable, like a Swiss Army knife. Researchers from across the laboratory have conducted flight tests on the Saab to develop all kinds of technologies for national security."
For example, the Saab was modified to host the Airborne Radar Testbed (ARTB), a high-performance radar system based on a computer-controlled array of antennas that can be electronically steered (instead of physically moved) in different directions. With the ARTB, researchers have matured innovative radio-frequency technology; prototyped advanced system concepts; and demonstrated concepts of operation for intelligence, surveillance, and reconnaissance (ISR) missions. With its open-architecture design and compliance with open standards, the ARTB can easily be reconfigured to suit specific R&D needs.
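As a general illustration of how such electronic steering works, a phased array points its beam by applying a progressive phase shift across its antenna elements rather than by physically rotating the antenna. The minimal sketch below shows only the textbook principle; the frequency, element count, and spacing are assumed values for illustration and are not the ARTB’s actual parameters.

```python
import numpy as np

# Illustrative sketch of electronic beam steering in a uniform linear phased array.
# All parameters below are assumed for illustration, not ARTB values.
c = 3.0e8                    # speed of light, m/s
freq = 10.0e9                # assumed operating frequency, Hz
lam = c / freq               # wavelength, m
d = lam / 2                  # half-wavelength element spacing, m
n_elements = 8
theta = np.deg2rad(30.0)     # desired beam angle from broadside, radians

# Progressive phase shift per element that steers the beam to theta:
# phi_n = -2*pi * n * d * sin(theta) / lambda
n = np.arange(n_elements)
phase_shifts = -2.0 * np.pi * n * d * np.sin(theta) / lam

print(np.rad2deg(phase_shifts) % 360.0)  # per-element phase commands, degrees
```

Because the beam direction is set by these electronically commanded phase shifts, it can be repointed far faster than a mechanically slewed antenna, which is part of what makes an array-based testbed straightforward to reconfigure for different experiments.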
"The Saab has enabled us to rapidly prototype and mature the complex system-of-systems solutions needed to realize critical warfighter capabilities," says Ramu Bhagavatula, an assistant leader of the laboratory's Embedded and Open Systems Group. "Recently, the Saab participated in a major national exercise as a surrogate multi-INT [intelligence] ISR platform. We demonstrated machine-to-machine cueing of our multi-INT payload to automatically recognize targets designated by an operational U.S. Air Force platform. The Saab's flexibility was key to integrating diverse technologies to develop this important capability."
In anticipation of the expiration of the Saab's lease, the Flight Test Facility and Financial Services Department conducted an extensive analysis of alternatives. Comparing the operational effectiveness, suitability, and life-cycle cost of various options, this analysis determined that the optimal solution for the laboratory and the government was to purchase the aircraft.
"Having the Saab in our permanent inventory allows research groups from across the laboratory to continuously leverage each other's test beds and expertise," says Linda McCabe, a project manager in the laboratory's Communication Networks and Analysis Group. "In addition, we can invest in long-term infrastructure updates that will benefit a wide range of users. For instance, my group helped obtain authorizations from various agencies to equip the Saab with Link 16, a secure communications network used by NATO and its allies to share tactical information."
The Saab acquisition is part of a larger recapitalization effort at the Flight Test Facility to support emerging technology development for years to come. This 10-year effort, slated for completion in 2026, is retiring aging, obsolete aircraft and replacing them with newer platforms that will be more cost-effective to maintain, easier to integrate rapidly prototyped systems into, and able to operate under expanded flight envelopes (the performance limits within which an aircraft can safely fly, defined by parameters such as speed, altitude, and maneuverability).
The successful test of SpaceX’s Starship launch vehicle, following a series of engineering challenges and failed launches, has reignited excitement over the possibilities this massive rocket may unlock for humanity’s greatest ambitions in space. The largest rocket ever built, Starship and its 33-engine “super heavy” booster completed a full launch into Earth orbit on Aug. 26, deployed eight test prototype satellites, and survived reentry for a simulated landing before coming down, mostly intact, in the Indian Ocean. The 400-foot rocket is designed to carry up to 150 tons of cargo to low Earth orbit, dramatically increasing potential payload volume from rockets currently in operation. In addition to the planned Artemis III mission to the lunar surface and proposed missions to Mars in the near future, Starship also poses an opportunity for large-scale scientific missions throughout the solar system.
The National Academy of Sciences Planetary Science Decadal Survey published a recommendation in 2022 outlining exploration of Uranus as its highest-priority flagship mission. This proposed mission was envisioned for the 2030s, assuming use of a Falcon Heavy expendable rocket and anticipating arrival at the planet before 2050. Earlier this summer, a paper from researchers in MIT’s Engineering Systems Lab found that Starship may enable this flagship mission to Uranus in half the flight time.
In this 3Q, Chloe Gentgen, a PhD student in aeronautics and astronautics and co-author on the recent study, describes the significance of Uranus as a flagship mission and what the current trajectory of Starship means for scientific exploration.
Q: Why has Uranus been identified as the highest-priority flagship mission?
A: Uranus is one of the most intriguing and least-explored planets in our solar system. The planet is tilted on its side, is extremely cold, presents a highly dynamic atmosphere with fast winds, and has an unusual and complex magnetic field. A few of Uranus’ many moons could be ocean worlds, making them potential candidates in the search for life in the solar system. The ice giants Uranus and Neptune also represent the closest match to most of the exoplanets discovered. A mission to Uranus would therefore radically transform our understanding of ice giants, the solar system, and exoplanets.
What we know about Uranus largely dates back to Voyager 2’s brief flyby nearly 40 years ago. No spacecraft has visited Uranus or Neptune since, making them the only planets yet to have a dedicated orbital mission. One of the main obstacles has been the sheer distance. Uranus is 19 times farther from the sun than the Earth is, and nearly twice as far as Saturn. Reaching it requires a heavy-lift launch vehicle and trajectories involving gravity assists from other planets.
Today, such heavy-lift launch vehicles are available, and trajectories have been identified for launch windows throughout the 2030s, which resulted in selecting a Uranus mission as the highest priority flagship in the 2022 decadal survey. The proposed concept, called Uranus Orbiter and Probe (UOP), would release a probe into the planet’s atmosphere and then embark on a multiyear tour of the system to study the planet’s interior, atmosphere, magnetosphere, rings, and moons.
Q: How do you envision your work on the Starship launch vehicle being deployed for further development?
A: Our study assessed the feasibility and potential benefits of launching a mission to Uranus with a Starship refueled in Earth’s orbit, instead of a Falcon Heavy (another SpaceX launch vehicle, currently operational). The Uranus decadal study showed that launching on a Falcon Heavy Expendable results in a cruise time of at least 13 years. Long cruise times present challenges, such as loss of team expertise and a higher operational budget. With the mission not yet underway, we saw an opportunity to evaluate launch vehicles currently in development, particularly Starship.
When refueled in orbit, Starship could launch a spacecraft directly to Uranus, without detours past other planets for gravity-assist maneuvers. The proposed spacecraft could then arrive at Uranus in just over six years, less than half the time currently envisioned. These high-energy trajectories require significant deceleration at Uranus to capture into orbit. If the spacecraft slows down propulsively, the burn would require 5 km/s of delta v (a measure of the change in velocity, and hence the propellant, needed for a maneuver), a much larger burn than spacecraft typically perform, which might result in a very complex design. A more conservative approach, assuming a maximum burn of 2 km/s at Uranus, would result in a cruise time of 8.5 years.
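To see why a 5 km/s insertion burn is so much more demanding than a 2 km/s one, the Tsiolkovsky rocket equation gives the fraction of a spacecraft’s mass that must be propellant for a given delta v. The short sketch below is a back-of-the-envelope illustration only; the specific impulse of roughly 320 seconds is an assumed value for a generic bipropellant engine, not a figure from the study.

```python
import math

def propellant_fraction(delta_v: float, isp: float, g0: float = 9.80665) -> float:
    """Tsiolkovsky rocket equation: fraction of initial spacecraft mass
    that must be propellant to deliver the given delta v (m/s)."""
    return 1.0 - math.exp(-delta_v / (isp * g0))

ISP = 320.0  # assumed specific impulse (s) for a generic bipropellant engine
for dv_km_s in (2.0, 5.0):
    frac = propellant_fraction(dv_km_s * 1000.0, ISP)
    print(f"{dv_km_s:.0f} km/s burn -> ~{frac:.0%} of the spacecraft's initial mass is propellant")
```

Under these assumptions, a 5 km/s burn would consume roughly 80 percent of the spacecraft’s initial mass as propellant, versus roughly half for the 2 km/s case, which is why the smaller burn is treated as the more conservative design point.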
An alternative to propulsive orbit insertion at Uranus is aerocapture, where the spacecraft, enclosed in a thermally protective aeroshell, dips into the planet’s atmosphere and uses aerodynamic drag to decelerate. We examined whether Starship itself could perform aerocapture, rather than being separated from the spacecraft shortly after launch. Starship is already designed to withstand atmospheric entry at Earth and Mars, and thus already has a thermal protection system that could, potentially, be modified for aerocapture at Uranus. While bringing a Starship vehicle all the way to Uranus presents significant challenges, our analysis showed that aerocapture with Starship would produce deceleration and heating loads similar to those of other Uranus aerocapture concepts and would enable a cruise time of six years.
In addition to launching the proposed spacecraft on a faster trajectory that would reach Uranus sooner, Starship’s capabilities could also be leveraged to deploy larger masses to Uranus, enabling an enhanced mission with additional instruments or probes.
Q: What does the recent successful test of Starship tell us about the viability and timeline for a potential mission to the outer solar system?
A: The latest Starship launch marked an important milestone for the company after three failed launches in recent months, renewing optimism about the rocket’s future capabilities. Looking ahead, the program will need to demonstrate on-orbit refueling, a capability central to both SpaceX’s long-term vision of deep-space exploration and this proposed mission.
Launch vehicle selection for flagship missions typically occurs approximately two years after the official mission formulation process begins, which has not yet commenced for the Uranus mission. As such, Starship still has a few more years to demonstrate its on-orbit refueling architecture before a decision has to be made.
Overall, Starship is still under development, and significant uncertainty remains about its performance, timelines, and costs. Even so, our initial findings paint a promising picture of the benefits that could be realized by using Starship for a flagship mission to Uranus.
A zoomed-in image of Uranus, captured by the James Webb Space Telescope’s Near-Infrared Camera (NIRCam) in 2023, reveals stunning views of the planet’s rings. On the right side of the planet there’s an area of brightening at the pole facing the sun, known as a polar cap. Uranus is the only planet in the solar system tilted on its side, which causes its extreme seasons.
The Center for International Studies (CIS) empowers students, faculty, and scholars to bring MIT’s interdisciplinary style of research and scholarship to address complex global challenges. In this Q&A, Mihaela Papa, the center's director of research and a principal research scientist at MIT, describes her role and her founding of the BRICS Lab, which studies how the BRICS group of major powers and emerging markets — comprising Brazil, Russia, India, China and South Africa, along with new mem
The Center for International Studies (CIS) empowers students, faculty, and scholars to bring MIT’s interdisciplinary style of research and scholarship to address complex global challenges.
In this Q&A, Mihaela Papa, the center's director of research and a principal research scientist at MIT, describes her role and her founding of the BRICS Lab, which studies how the BRICS group of major powers and emerging markets — comprising Brazil, Russia, India, China and South Africa, along with new members and partners — seeks to reform global policymaking. She also discusses the ongoing mission of CIS to tackle the world's most complex challenges in new and creative ways.
Q: What is your role at CIS, and some of your key accomplishments since joining the center just over a year ago?
A: I serve as director of research and principal research scientist at CIS, a role that bridges management and scholarship. I oversee grant and fellowship programs, spearhead new research initiatives, build research communities across our center's area programs and MIT schools, and mentor the next generation of scholars. My academic expertise is in international relations, and I publish on global governance and sustainable development, particularly through my new BRICS Lab.
This past year, I focused on building collaborative platforms that highlight CIS’ role as an interdisciplinary hub and expand its research reach. With Evan Lieberman, the director of CIS, I launched the CIS Global Research and Policy Seminar series to address current challenges in global development and governance, foster cross-disciplinary dialogue, and connect theoretical insights to policy solutions. We also convened a Climate Adaptation Workshop, which examined promising strategies for financing adaptation and advancing policy innovation. We documented the outcomes in a workshop report that outlines a broader research agenda contributing to MIT’s larger climate mission.
In parallel, I have been reviewing CIS’ grant-making programs to improve how we serve our community, while also supporting regional initiatives such as research planning related to Ukraine. Together with the center's MIT-Brazil faculty director Brad Olsen, I secured a MITHIC [MIT Human Insight Collaboration] Connectivity grant to build an MIT Amazonia research community that connects MIT scholars with regional partners and strengthens collaboration across the Amazon. Finally, I launched the BRICS Lab to analyze transformations in global governance and have ongoing research on BRICS and food security and data centers in BRICS.
Q: Tell us more about the BRICS Lab.
A: The BRICS countries comprise the majority of the world’s population and an expanding share of the global economy. [Originally comprising Brazil, Russia, India, and China, BRICS currently includes 10 members (Brazil, Russia, India, China, South Africa, Egypt, Ethiopia, Indonesia, Iran and the United Arab Emirates) and 10 partner countries.] As a group, they carry the collective weight to shape international rules, influence global markets, and redefine norms — yet the question remains: Will they use this power effectively? The BRICS Lab explores the implications of the bloc’s rise for international cooperation and its role in reshaping global politics. Our work focuses on three areas: the design and strategic use of informal groups like BRICS in world affairs; the coalition’s potential to address major challenges such as food security, climate change, and artificial intelligence; and the implications of U.S. policy toward BRICS for the future of multilateralism.
Q: What are the center’s biggest research priorities right now?
A: Our center was founded in response to rising geopolitical tensions and the urgent need for policy rooted in rigorous, evidence-based research. Since then, we have grown into a hub that combines interdisciplinary scholarship and actively engages with policymakers and the public. Today, as in our early years, the center brings together exceptional researchers with the ambition to address the world’s most pressing challenges in new and creative ways.
Our core focus spans security, development, and human dignity. Security studies have been a priority for the center, and our new nuclear security programming advances this work while training the next generation of scholars in this critical field. On the development front, our work has explored how societies manage diverse populations and navigate international migration, as well as how they engage with human rights and changing patterns of regime dynamics.
We are pursuing new research in three areas. First, on climate change, we seek to understand how societies confront environmental risks and harms, from insurance to water and food security in the international context. Second, we examine shifting patterns of global governance as rising powers set new agendas and take on greater responsibilities in the international system. Finally, we are initiating research on the impact of AI — how it reshapes governance across international relations, what role AI corporations play, and how AI-related risks can be managed.
As we approach our 75th anniversary in 2026, we are excited to bring researchers together to spark bold ideas that open new possibilities for the future.
“Our center was founded in response to rising geopolitical tensions and the urgent need for policy rooted in rigorous, evidence-based research,” says Mihaela Papa. “Today, as in our early years, the center brings together exceptional researchers with the ambition to address the world’s most pressing challenges in new and creative ways.”
A Saab 340 aircraft recently became a permanent fixture of the fleet at the MIT Lincoln Laboratory Flight Test Facility, which supports R&D programs across the lab.
Over the past five years, the facility leased and operated the twin-engine turboprop, once commercially used for the regional transport of passengers and cargo. During this time, staff modified the aircraft with a suite of radar, sensing, and communications capabilities. Transitioning the aircraft from a leased to a government-owned asset retains the aircraft's capabilities for present and future R&D in support of national security and reduces costs for Lincoln Laboratory sponsors.
With the acquisition of the Saab, the Flight Test Facility currently maintains five government-owned aircraft — including three Gulfstream IVs and a Cessna 206 — as well as a leased Twin Otter, all housed on Hanscom Air Force Base, just over a mile from the laboratory's main campus.
"Of all our aircraft, the Saab is the most multi-mission-capable," says David Culbertson, manager of the Flight Test Facility. "It's highly versatile and adaptable, like a Swiss Army knife. Researchers from across the laboratory have conducted flight tests on the Saab to develop all kinds of technologies for national security."
For example, the Saab was modified to host the Airborne Radar Testbed (ARTB), a high-performance radar system based on a computer-controlled array of antennas that can be electronically steered (instead of physically moved) in different directions. With the ARTB, researchers have matured innovative radio-frequency technology; prototyped advanced system concepts; and demonstrated concepts of operation for intelligence, surveillance, and reconnaissance (ISR) missions. With its open-architecture design and compliance with open standards, the ARTB can easily be reconfigured to suit specific R&D needs.
"The Saab has enabled us to rapidly prototype and mature the complex system-of-systems solutions needed to realize critical warfighter capabilities," says Ramu Bhagavatula, an assistant leader of the laboratory's Embedded and Open Systems Group. "Recently, the Saab participated in a major national exercise as a surrogate multi-INT [intelligence] ISR platform. We demonstrated machine-to-machine cueing of our multi-INT payload to automatically recognize targets designated by an operational U.S. Air Force platform. The Saab's flexibility was key to integrating diverse technologies to develop this important capability."
In anticipation of the expiration of the Saab's lease, the Flight Test Facility and Financial Services Department conducted an extensive analysis of alternatives. Comparing the operational effectiveness, suitability, and life-cycle cost of various options, this analysis determined that the optimal solution for the laboratory and the government was to purchase the aircraft.
"Having the Saab in our permanent inventory allows research groups from across the laboratory to continuously leverage each other's test beds and expertise," says Linda McCabe, a project manager in the laboratory's Communication Networks and Analysis Group. "In addition, we can invest in long-term infrastructure updates that will benefit a wide range of users. For instance, my group helped obtain authorizations from various agencies to equip the Saab with Link 16, a secure communications network used by NATO and its allies to share tactical information."
The Saab acquisition is part of a larger recapitalization effort at the Flight Test Facility to support emerging technology development for years to come. This 10-year effort, slated for completion in 2026, is retiring aging, obsolete aircraft and replacing them with newer platforms that will be more cost-effective to maintain, easier to integrate rapidly prototyped systems into, and able to operate under expanded flight envelopes (the performance limits within which an aircraft can safely fly, defined by parameters such as speed, altitude, and maneuverability).
The successful test of SpaceX’s Starship launch vehicle, following a series of engineering challenges and failed launches, has reignited excitement over the possibilities this massive rocket may unlock for humanity’s greatest ambitions in space. The largest rocket ever built, Starship and its 33-engine “super heavy” booster completed a full launch into Earth orbit on Aug. 26, deployed eight test prototype satellites, and survived reentry for a simulated landing before coming down, mostly intact, in the Indian Ocean. The 400-foot rocket is designed to carry up to 150 tons of cargo to low Earth orbit, dramatically increasing potential payload capacity compared to rockets currently in operation. In addition to the planned Artemis III mission to the lunar surface and proposed missions to Mars in the near future, Starship also presents an opportunity for large-scale scientific missions throughout the solar system.
The National Academy of Sciences Planetary Science Decadal Survey published a recommendation in 2022 outlining exploration of Uranus as its highest-priority flagship mission. This proposed mission was envisioned for the 2030s, assuming use of a Falcon Heavy expendable rocket and anticipating arrival at the planet before 2050. Earlier this summer, a paper from researchers in MIT’s Engineering Systems Lab found that Starship may enable this flagship mission to Uranus in half the flight time.
In this 3Q, Chloe Gentgen, a PhD student in aeronautics and astronautics and co-author on the recent study, describes the significance of Uranus as a flagship mission and what the current trajectory of Starship means for scientific exploration.
Q: Why has Uranus been identified as the highest-priority flagship mission?
A: Uranus is one of the most intriguing and least-explored planets in our solar system. The planet is tilted on its side, is extremely cold, presents a highly dynamic atmosphere with fast winds, and has an unusual and complex magnetic field. A few of Uranus’ many moons could be ocean worlds, making them potential candidates in the search for life in the solar system. The ice giants Uranus and Neptune also represent the closest match to most of the exoplanets discovered. A mission to Uranus would therefore radically transform our understanding of ice giants, the solar system, and exoplanets.
What we know about Uranus largely dates back to Voyager 2’s brief flyby nearly 40 years ago. No spacecraft has visited Uranus or Neptune since, making them the only planets that have never had a dedicated orbital mission. One of the main obstacles has been the sheer distance. Uranus is 19 times farther from the sun than the Earth is, and nearly twice as far as Saturn. Reaching it requires a heavy-lift launch vehicle and trajectories involving gravity assists from other planets.
Today, such heavy-lift launch vehicles are available, and trajectories have been identified for launch windows throughout the 2030s, which led the 2022 decadal survey to select a Uranus mission as its highest-priority flagship. The proposed concept, called Uranus Orbiter and Probe (UOP), would release a probe into the planet’s atmosphere and then embark on a multiyear tour of the system to study the planet’s interior, atmosphere, magnetosphere, rings, and moons.
Q: How do you envision your work on the Starship launch vehicle being deployed for further development?
A: Our study assessed the feasibility and potential benefits of launching a mission to Uranus with a Starship refueled in Earth’s orbit, instead of a Falcon Heavy (another SpaceX launch vehicle, currently operational). The Uranus decadal study showed that launching on a Falcon Heavy Expendable results in a cruise time of at least 13 years. Long cruise times present challenges, such as loss of team expertise and a higher operational budget. With the mission not yet underway, we saw an opportunity to evaluate launch vehicles currently in development, particularly Starship.
When refueled in orbit, Starship could launch a spacecraft directly to Uranus, without detours by other planets for gravity-assist maneuvers. The proposed spacecraft could then arrive at Uranus in just over six years, less than half the time currently envisioned. These high-energy trajectories require significant deceleration at Uranus to capture into orbit. If the spacecraft slows down propulsively, the burn would require 5 km/s of delta-v (the change in velocity needed for the maneuver), far more than spacecraft typically perform, which might result in a very complex design. A more conservative approach, assuming a maximum burn of 2 km/s at Uranus, would result in a cruise time of 8.5 years.
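To see why a 5 km/s insertion burn is so demanding, the ideal rocket equation gives a rough sense of the propellant required. The sketch below is a back-of-the-envelope illustration only; the engine specific impulse and the comparison cases are assumptions, not figures from the MIT study.

```python
import math

def propellant_fraction(delta_v_ms, isp_s, g0=9.81):
    """Fraction of total spacecraft mass that must be propellant (Tsiolkovsky rocket equation)."""
    return 1 - math.exp(-delta_v_ms / (isp_s * g0))

# Assume a storable bipropellant engine with a specific impulse of roughly 320 s.
for dv_kms in (2.0, 5.0):
    frac = propellant_fraction(dv_kms * 1_000, isp_s=320)
    print(f"delta-v of {dv_kms} km/s -> ~{frac:.0%} of spacecraft mass is propellant")
```

Under these assumptions, a 2 km/s burn already demands roughly half the spacecraft's mass in propellant, while 5 km/s pushes that toward 80 percent, which helps explain why aerocapture becomes attractive.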
An alternative to propulsive orbit insertion at Uranus is aerocapture, where the spacecraft, enclosed in a thermally protective aeroshell, dips into the planet’s atmosphere and uses aerodynamic drag to decelerate. We examined whether Starship itself could perform aerocapture, rather than being separated from the spacecraft shortly after launch. Starship is already designed to withstand atmospheric entry at Earth and Mars, and thus already has a thermal protection system that could, potentially, be modified for aerocapture at Uranus. While bringing a Starship vehicle all the way to Uranus presents significant challenges, our analysis showed that aerocapture with Starship would produce deceleration and heating loads similar to those of other Uranus aerocapture concepts and would enable a cruise time of six years.
In addition to launching the proposed spacecraft on a faster trajectory that would reach Uranus sooner, Starship’s capabilities could also be leveraged to deploy larger masses to Uranus, enabling an enhanced mission with additional instruments or probes.
Q: What does the recent successful test of Starship tell us about the viability and timeline for a potential mission to the outer solar system?
A: The latest Starship launch marked an important milestone for the company after three failed launches in recent months, renewing optimism about the rocket’s future capabilities. Looking ahead, the program will need to demonstrate on-orbit refueling, a capability central to both SpaceX’s long-term vision of deep-space exploration and this proposed mission.
Launch vehicle selection for flagship missions typically occurs approximately two years after the official mission formulation process begins, which has not yet commenced for the Uranus mission. As such, Starship still has a few more years to demonstrate its on-orbit refueling architecture before a decision has to be made.
Overall, Starship is still under development, and significant uncertainty remains about its performance, timelines, and costs. Even so, our initial findings paint a promising picture of the benefits that could be realized by using Starship for a flagship mission to Uranus.
A zoomed-in image of Uranus, captured by the James Webb Space Telescope’s Near-Infrared Camera (NIRCam) in 2023, reveals stunning views of the planet’s rings. On the right side of the planet there’s an area of brightening at the pole facing the sun, known as a polar cap. Uranus is the only planet in the solar system tilted on its side, which causes its extreme seasons.
The following article is adapted from a joint press release issued today by MIT and the Giant Magellan Telescope.
MIT is lending its support to the Giant Magellan Telescope, joining the international consortium to advance the $2.6 billion observatory in Chile. The Institute’s participation, enabled by a transformational gift from philanthropists Phillip (Terry) Ragon ’72 and Susan Ragon, adds to the momentum to construct the Giant Magellan Telescope, whose 25.4-meter aperture will have five times the light-collecting area and up to 200 times the power of existing observatories.
“As philanthropists, Terry and Susan have an unerring instinct for finding the big levers: those interventions that truly transform the scientific landscape,” says MIT President Sally Kornbluth. “We saw this with their founding of the Ragon Institute, which pursues daring approaches to harnessing the immune system to prevent and cure human diseases. With today’s landmark gift, the Ragons enable an equally lofty mission to better understand the universe — and we could not be more grateful for their visionary support.”
MIT will be the 16th member of the international consortium advancing the Giant Magellan Telescope and the 10th participant based in the United States. Together, the consortium has invested $1 billion in the observatory — the largest-ever private investment in ground-based astronomy. The Giant Magellan Telescope is already 40 percent under construction, with major components being designed and manufactured across 36 U.S. states.
“MIT is honored to join the consortium and participate in this exceptional scientific endeavor,” says Ian A. Waitz, MIT’s vice president for research. “The Giant Magellan Telescope will bring tremendous new capabilities to MIT astronomy and to U.S. leadership in fundamental science. The construction of this uniquely powerful telescope represents a vital private and public investment in scientific excellence for decades to come.”
MIT brings to the consortium powerful scientific capabilities and a legacy of astronomical excellence. MIT’s departments of Physics and of Earth, Atmospheric and Planetary Sciences, and the MIT Kavli Institute for Astrophysics and Space Research, are internationally recognized for research in exoplanets, cosmology, and environments of extreme gravity, such as black holes and compact binary stars. MIT’s involvement will strengthen the Giant Magellan Telescope’s unique capabilities in high-resolution spectroscopy, adaptive optics, and the search for life beyond Earth. It also deepens a long-standing scientific relationship: MIT is already a partner in the existing twin Magellan Telescopes at Las Campanas Observatory in Chile — one of the most scientifically valuable observing sites on Earth, and the same site where the Giant Magellan Telescope is now under construction.
“Since Galileo’s first spyglass, the world’s largest telescope has doubled in aperture every 40 to 50 years,” says Robert A. Simcoe, director of the MIT Kavli Institute and the Francis L. Friedman Professor of Physics. “Each generation’s leading instruments have resolved important scientific questions of the day and then surprised their builders with new discoveries not yet even imagined, helping humans understand our place in the universe. Together with the Giant Magellan Telescope, MIT is helping to realize our generation’s contribution to this lineage, consistent with our mission to advance the frontier of fundamental science by undertaking the most audacious and advanced engineering challenges.”
Contributing to the national strategy
MIT’s support comes at a pivotal time for the observatory. In June 2025, the National Science Foundation (NSF) advanced the Giant Magellan Telescope into its Final Design Phase, one of the final steps before it becomes eligible for federal construction funding. To demonstrate readiness and a strong commitment to U.S. leadership, the consortium offered to privately fund this phase, which is traditionally supported by the NSF.
MIT’s investment is an integral part of the national strategy to secure U.S. access to the next generation of research facilities known as “extremely large telescopes.” The Giant Magellan Telescope is a core partner in the U.S. Extremely Large Telescope Program, the nation’s top priority in astronomy. The National Academies’ Astro2020 Decadal Survey called the program “absolutely essential if the United States is to maintain a position as a leader in ground-based astronomy.” This long-term strategy also includes the recently commissioned Vera C. Rubin Observatory in Chile. Rubin is scanning the sky to detect rare, fast-changing cosmic events, while the Giant Magellan Telescope will provide the sensitivity, resolution, and spectroscopic instruments needed to study them in detail. Together, these Southern Hemisphere observatories will give U.S. scientists the tools they need to lead 21st-century astrophysics.
“Without direct access to the Giant Magellan Telescope, the U.S. risks falling behind in fundamental astronomy, as Rubin’s most transformational discoveries will be utilized by other nations with access to their own ‘extremely large telescopes’ under development,” says Walter Massey, board chair of the Giant Magellan Telescope.
MIT’s participation brings the United States a step closer to completing the promise of this powerful new observatory on a globally competitive timeline. With federal construction funding, it is expected that the observatory could reach 90 percent completion in less than two years and become operational by the 2030s.
“MIT brings critical expertise and momentum at a time when global leadership in astronomy hangs in the balance,” says Robert Shelton, president of the Giant Magellan Telescope. “With MIT, we are not just adding a partner; we are accelerating a shared vision for the future and reinforcing the United States’ position at the forefront of science.”
Other members of the Giant Magellan Telescope consortium include the University of Arizona, Carnegie Institution for Science, The University of Texas at Austin, Korea Astronomy and Space Science Institute, University of Chicago, São Paulo Research Foundation (FAPESP), Texas A&M University, Northwestern University, Harvard University, Astronomy Australia Ltd., Australian National University, Smithsonian Institution, Weizmann Institute of Science, Academia Sinica Institute of Astronomy and Astrophysics, and Arizona State University.
A boon for astrophysics research and education
Access to the world’s best optical telescopes is a critical resource for MIT researchers. More than 150 individual science programs at MIT have relied on major astronomical observatories in the past three years, engaging faculty, researchers, and students in investigations into the marvels of the universe. Recent research projects have included chemical studies of the universe’s oldest stars, led by Professor Anna Frebel; spectroscopy of stars shredded by dormant black holes, led by Professor Erin Kara; and measurements of a white dwarf teetering on the precipice of a black hole, led by Professor Kevin Burdge.
“Over many decades, researchers at the MIT Kavli Institute have used unparalleled instruments to discover previously undetected cosmic phenomena from both ground-based observations and spaceflight missions,” says Nergis Mavalvala, dean of the MIT School of Science and the Curtis (1963) and Kathleen Marble Professor of Astrophysics. “I have no doubt our brilliant colleagues will carry on that tradition with the Giant Magellan Telescope, and I can’t wait to see what they will discover next.”
The Giant Magellan Telescope will also provide a platform for advanced R&D in remote sensing, creating opportunities to build custom infrared and optical spectrometers and high-speed imagers to further study our universe.
“One cannot have a leading physics program without a leading astrophysics program. Access to time on the Giant Magellan Telescope will ensure that future generations of MIT researchers will continue to work at the forefront of astrophysical discovery for decades to come,” says Deepto Chakrabarty, head of the MIT Department of Physics, the William A. M. Burden Professor in Astrophysics, and principal investigator at the MIT Kavli Institute. “Our institutional access will help attract and retain top researchers in astrophysics, planetary science, and advanced optics, and will give our PhD students and postdocs unrivaled educational opportunities.”
In part 2 of our two-part series on generative artificial intelligence’s environmental impacts, MIT News explores some of the ways experts are working to reduce the technology’s carbon footprint.
The energy demands of generative AI are expected to continue increasing dramatically over the next decade.
For instance, an April 2025 report from the International Energy Agency predicts that the global electricity demand from data centers, which house the computing infrastructure to train and deploy AI models, will more than double by 2030, to around 945 terawatt-hours. While not all operations performed in a data center are AI-related, this total amount is slightly more than the energy consumption of Japan.
Moreover, an August 2025 analysis from Goldman Sachs Research forecasts that about 60 percent of the increasing electricity demands from data centers will be met by burning fossil fuels, increasing global carbon emissions by about 220 million tons. In comparison, driving a gas-powered car for 5,000 miles produces about 1 ton of carbon dioxide.
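For scale, the car comparison above can be turned into a rough equivalence; the figures below are the rounded numbers quoted above and are order-of-magnitude only.

```python
added_emissions_tons = 220_000_000   # forecast increase in annual CO2 emissions
miles_per_ton = 5_000                # roughly 1 ton of CO2 per 5,000 miles driven

equivalent_miles = added_emissions_tons * miles_per_ton
print(f"Roughly {equivalent_miles:,} car-miles")  # about 1.1 trillion miles
```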
These statistics are staggering, but at the same time, scientists and engineers at MIT and around the world are studying innovations and interventions to mitigate AI’s ballooning carbon footprint, from boosting the efficiency of algorithms to rethinking the design of data centers.
Considering carbon emissions
Talk of reducing generative AI’s carbon footprint is typically centered on “operational carbon” — the emissions generated by running the powerful processors, known as GPUs, inside a data center. It often ignores “embodied carbon,” the emissions created by building the data center in the first place, says Vijay Gadepally, senior scientist at MIT Lincoln Laboratory, who leads research projects in the Lincoln Laboratory Supercomputing Center.
Constructing and retrofitting a data center, built from tons of steel and concrete and filled with air conditioning units, computing hardware, and miles of cable, consumes a huge amount of carbon. In fact, the environmental impact of building data centers is one reason companies like Meta and Google are exploring more sustainable building materials. (Cost is another factor.)
Plus, data centers are enormous buildings — the world’s largest, the China Telecom-Inner Mongolia Information Park, covers roughly 10 million square feet — with about 10 to 50 times the energy density of a normal office building, Gadepally adds.
“The operational side is only part of the story. Some things we are working on to reduce operational emissions may lend themselves to reducing embodied carbon, too, but we need to do more on that front in the future,” he says.
Reducing operational carbon emissions
When it comes to reducing operational carbon emissions of AI data centers, there are many parallels with home energy-saving measures. For one, we can simply turn down the lights.
“Even if you have the worst lightbulbs in your house from an efficiency standpoint, turning them off or dimming them will always use less energy than leaving them running at full blast,” Gadepally says.
In the same fashion, research from the Supercomputing Center has shown that “turning down” the GPUs in a data center so they consume about three-tenths the energy has minimal impacts on the performance of AI models, while also making the hardware easier to cool.
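As a concrete, simplified illustration of “turning down” a GPU, the sketch below lowers a card’s power cap through NVIDIA’s management library. It assumes an NVIDIA GPU, the nvidia-ml-py (pynvml) bindings, and administrator privileges; the 70 percent cap is an arbitrary example, not the setting used in the Lincoln Laboratory research.

```python
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system

# Query the supported range and the current cap (values are in milliwatts).
min_mw, max_mw = pynvml.nvmlDeviceGetPowerManagementLimitConstraints(handle)
current_mw = pynvml.nvmlDeviceGetPowerManagementLimit(handle)

# Cap the card at roughly 70 percent of its maximum draw, staying within limits.
target_mw = max(min_mw, int(0.7 * max_mw))
pynvml.nvmlDeviceSetPowerManagementLimit(handle, target_mw)

print(f"Power cap changed from {current_mw / 1000:.0f} W to {target_mw / 1000:.0f} W")
pynvml.nvmlShutdown()
```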
Another strategy is to use less energy-intensive computing hardware.
Demanding generative AI workloads, such as training new reasoning models like GPT-5, usually need many GPUs working simultaneously. The Goldman Sachs analysis estimates that a state-of-the-art system could soon have as many as 576 connected GPUs operating at once.
But engineers can sometimes achieve similar results by reducing the precision of computing hardware, perhaps by switching to less powerful processors that have been tuned to handle a specific AI workload.
There are also measures that boost the efficiency of training power-hungry deep-learning models before they are deployed.
Gadepally’s group found that about half the electricity used for training an AI model is spent to get the last 2 or 3 percentage points in accuracy. Stopping the training process early can save a lot of that energy.
“There might be cases where 70 percent accuracy is good enough for one particular application, like a recommender system for e-commerce,” he says.
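A generic way to act on that observation is to stop training once accuracy is “good enough” or per-epoch gains become marginal. The sketch below is a minimal illustration of the idea, not the Supercomputing Center’s actual tooling; the thresholds and the simulated accuracy curve are made up.

```python
def train_with_early_stopping(train_one_epoch, evaluate, max_epochs=100,
                              target_acc=0.70, min_gain=0.005, patience=3):
    """Stop when accuracy is good enough or per-epoch gains become marginal."""
    best_acc, stale = 0.0, 0
    for epoch in range(max_epochs):
        train_one_epoch()
        acc = evaluate()
        if acc >= target_acc:
            return epoch, acc              # good enough for this application
        if acc - best_acc >= min_gain:
            best_acc, stale = acc, 0       # meaningful improvement, keep going
        else:
            stale += 1
            if stale >= patience:
                return epoch, best_acc     # diminishing returns; save the energy
    return max_epochs, best_acc

# Toy usage with a simulated accuracy curve that saturates around 72 percent.
import itertools, math
epochs = itertools.count()
simulated_eval = lambda: 0.72 * (1 - math.exp(-0.3 * next(epochs)))
print(train_with_early_stopping(lambda: None, simulated_eval))
```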
Researchers can also take advantage of efficiency-boosting measures.
For instance, a postdoc in the Supercomputing Center realized the group might run a thousand simulations during the training process to pick the two or three best AI models for their project.
By building a tool that allowed them to avoid about 80 percent of those wasted computing cycles, they dramatically reduced the energy demands of training with no reduction in model accuracy, Gadepally says.
Leveraging efficiency improvements
Constant innovation in computing hardware, such as denser arrays of transistors on semiconductor chips, is still enabling dramatic improvements in the energy efficiency of AI models.
Even though energy efficiency improvements have been slowing for most chips since about 2005, the amount of computation that GPUs can do per joule of energy has been improving by 50 to 60 percent each year, says Neil Thompson, director of the FutureTech Research Project at MIT’s Computer Science and Artificial Intelligence Laboratory and a principal investigator at MIT’s Initiative on the Digital Economy.
“The still-ongoing ‘Moore’s Law’ trend of getting more and more transistors on chip still matters for a lot of these AI systems, since running operations in parallel is still very valuable for improving efficiency,” says Thompson.
Even more significant, his group’s research indicates that efficiency gains from new model architectures that can solve complex problems faster, consuming less energy to achieve the same or better results, are doubling every eight or nine months.
Thompson coined the term “negaflop” to describe this effect. The same way a “negawatt” represents electricity saved due to energy-saving measures, a “negaflop” is a computing operation that doesn’t need to be performed due to algorithmic improvements.
These could be things like “pruning” away unnecessary components of a neural network or employing compression techniques that enable users to do more with less computation.
“If you need to use a really powerful model today to complete your task, in just a few years, you might be able to use a significantly smaller model to do the same thing, which would carry much less environmental burden. Making these models more efficient is the single-most important thing you can do to reduce the environmental costs of AI,” Thompson says.
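As a small, concrete example of the “pruning” idea mentioned above, the sketch below uses PyTorch’s built-in pruning utilities to zero out low-magnitude weights in a toy network. The tiny model and the 30 percent pruning amount are arbitrary choices for illustration; real negaflop-style savings come from applying such techniques to large production models.

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

# A toy network standing in for a much larger production model.
model = nn.Sequential(
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)

# Zero out the 30 percent of weights with the smallest magnitude in each
# linear layer (L1 unstructured pruning), then make the change permanent.
for module in model:
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")

total = sum(p.numel() for p in model.parameters())
zeros = sum((p == 0).sum().item() for p in model.parameters())
print(f"{zeros:,} of {total:,} parameters are now zero ({zeros / total:.0%})")
```

Zeroed weights save energy only if the runtime or hardware exploits the resulting sparsity, which is why pruning is typically paired with sparse kernels or other compression techniques.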
Maximizing energy savings
While reducing the overall energy use of AI algorithms and computing hardware will cut greenhouse gas emissions, not all energy is the same, Gadepally adds.
“The amount of carbon emissions in 1 kilowatt hour varies quite significantly, even just during the day, as well as over the month and year,” he says.
Engineers can take advantage of these variations by leveraging the flexibility of AI workloads and data center operations to maximize emissions reductions. For instance, some generative AI workloads don’t need to be performed in their entirety at the same time.
Splitting computing operations so some are performed later, when more of the electricity fed into the grid is from renewable sources like solar and wind, can go a long way toward reducing a data center’s carbon footprint, says Deepjyoti Deka, a research scientist in the MIT Energy Initiative.
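In practice, this kind of carbon-aware scheduling amounts to sliding a deferrable job into the cleanest window of a grid-intensity forecast. The sketch below is a minimal illustration; the hourly intensity values are invented, and a real system would pull forecasts from its grid operator or a carbon-intensity data service.

```python
def best_start_hour(intensity_forecast, job_hours):
    """Start index of the contiguous window with the lowest total carbon intensity."""
    starts = range(len(intensity_forecast) - job_hours + 1)
    return min(starts, key=lambda s: sum(intensity_forecast[s:s + job_hours]))

# Hypothetical 24-hour forecast (gCO2 per kWh): the grid is cleanest mid-day,
# when solar generation peaks.
forecast = [430, 420, 410, 400, 390, 370, 340, 300, 250, 210, 180, 170,
            165, 175, 200, 240, 290, 350, 400, 430, 450, 460, 455, 440]

start = best_start_hour(forecast, job_hours=4)
average = sum(forecast[start:start + 4]) / 4
print(f"Run the 4-hour job starting at hour {start} (~{average:.0f} gCO2/kWh)")
```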
Deka and his team are also studying “smarter” data centers where the AI workloads of multiple companies using the same computing equipment are flexibly adjusted to improve energy efficiency.
“By looking at the system as a whole, our hope is to minimize energy use as well as dependence on fossil fuels, while still maintaining reliability standards for AI companies and users,” Deka says.
He and others at MITEI are building a flexibility model of a data center that considers the differing energy demands of training a deep-learning model versus deploying that model. Their hope is to uncover the best strategies for scheduling and streamlining computing operations to improve energy efficiency.
The researchers are also exploring the use of long-duration energy storage units at data centers, which store excess energy for times when it is needed.
With these systems in place, a data center could use stored energy that was generated by renewable sources during a high-demand period, or avoid the use of diesel backup generators if there are fluctuations in the grid.
“Long-duration energy storage could be a game-changer here because we can design operations that really change the emission mix of the system to rely more on renewable energy,” Deka says.
In addition, researchers at MIT and Princeton University are developing a software tool for investment planning in the power sector, called GenX, which could be used to help companies determine the ideal place to locate a data center to minimize environmental impacts and costs.
Location can have a big impact on reducing a data center’s carbon footprint. For instance, Meta operates a data center in Luleå, a city on the coast of northern Sweden where cooler temperatures reduce the amount of electricity needed to cool computing hardware.
Thinking farther outside the box (way farther), some governments are even exploring the construction of data centers on the moon where they could potentially be operated with nearly all renewable energy.
AI-based solutions
Currently, the expansion of renewable energy generation here on Earth isn’t keeping pace with the rapid growth of AI, which is one major roadblock to reducing its carbon footprint, says Jennifer Turliuk MBA ’25, a short-term lecturer, former Sloan Fellow, and former practice leader of climate and energy AI at the Martin Trust Center for MIT Entrepreneurship.
The local, state, and federal review processes required for new renewable energy projects can take years.
Researchers at MIT and elsewhere are exploring the use of AI to speed up the process of connecting new renewable energy systems to the power grid.
For instance, a generative AI model could streamline interconnection studies that determine how a new project will impact the power grid, a step that often takes years to complete.
“Machine learning is great for tackling complex situations, and the electrical grid is said to be one of the largest and most complex machines in the world,” Turliuk adds.
For instance, AI could help optimize the prediction of solar and wind energy generation or identify ideal locations for new facilities.
It could also be used to perform predictive maintenance and fault detection for solar panels or other green energy infrastructure, or to monitor the capacity of transmission wires to maximize efficiency.
By helping researchers gather and analyze huge amounts of data, AI could also inform targeted policy interventions aimed at getting the biggest “bang for the buck” from areas such as renewable energy, Turliuk says.
To help policymakers, scientists, and enterprises consider the multifaceted costs and benefits of AI systems, she and her collaborators developed the Net Climate Impact Score.
The score is a framework that can be used to help determine the net climate impact of AI projects, considering emissions and other environmental costs along with potential environmental benefits in the future.
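The published framework itself is not reproduced here, but the basic accounting it describes can be sketched as expected benefits weighed against operational and embodied costs. The function and the example numbers below are hypothetical illustrations, not the actual Net Climate Impact Score methodology.

```python
def net_climate_impact(avoided_tons_co2e, operational_tons_co2e, embodied_tons_co2e):
    """Positive: the project avoids more emissions than it causes; negative: net emitter."""
    return avoided_tons_co2e - (operational_tons_co2e + embodied_tons_co2e)

# Hypothetical AI project that speeds up renewable-energy interconnection studies.
score = net_climate_impact(avoided_tons_co2e=12_000,
                           operational_tons_co2e=3_500,
                           embodied_tons_co2e=1_200)
print(f"Net climate impact: {score:+,} tons CO2e per year")
```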
At the end of the day, the most effective solutions will likely result from collaborations among companies, regulators, and researchers, with academia leading the way, Turliuk adds.
“Every day counts. We are on a path where the effects of climate change won’t be fully known until it is too late to do anything about it. This is a once-in-a-lifetime opportunity to innovate and make AI systems less carbon-intense,” she says.
“We are on a path where the effects of climate change won’t be fully known until it is too late to do anything about it,” says Jennifer Turliuk MBA ’25, who is working to help policymakers, scientists, and enterprises consider the multifaceted costs and benefits of generative AI. “This is a once-in-a-lifetime opportunity to innovate and make AI systems less carbon-intense.”
Artificial intelligence may not be artificial
Researcher traces evolution of computation power of human brains, parallels to AI, argues key to increasing complexity is cooperation
By Liz Mineo, Harvard Staff Writer, September 29, 2025
The term artificial intelligence conveys the sense that what computers do is either inferior to or at least apart from human intelligence. AI researcher Blaise Agüera y Arcas argues that may not be the case.
Speaking Wednesday at an event sponsored by Harvard Law School’s Berkman Klein Center for Internet & Society, Agüera y Arcas, Google’s CTO of technology and society, traced the evolution of both human and artificial intelligence in ways that seem to mirror each other.
“Why has the computational power of brains, not just of AI models, grown explosively throughout evolution?” said Agüera y Arcas, the author of the new book “What Is Intelligence? Lessons from AI About Evolution, Computing, and Minds.” “If we rewind 500 million years, we see only things with very small brains, and if we go back a billion years, we see no brains at all.”
According to Agüera y Arcas, human brains evolved to be computational, meaning that they process information by transforming various kinds of inputs into signals or outputs, and that most of the computation that brains do takes the form of predictions, which is what AI systems do.
“I hear a lot of people say that it’s a metaphor to talk about brains as computers,” said Agüera y Arcas. “I don’t mean this metaphorically. I mean it very literally … The premise of computational neuroscience is that what brains do is process information, not that they are like computers, but that they are computers.”
Agüera y Arcas’ book explores the evolution and social origins of intelligence and develops his insights on what he calls the computational nature of intelligence, biology, and life as a whole.
It draws on ideas from scientists such as Alan Turing and John von Neumann and their theories on self-replication and universal computation, as well as evolutionary biologist Lynn Margulis’ theory of symbiogenesis and Agüera y Arcas’ own research and experiments at Google.
Agüera y Arcas used Margulis’ theory, which suggests that merging different organisms to form more complex entities played a key role in cell evolution, to explain the similarities between the computational aspects of both biology and AI models, which also engage in symbiotic relationships of cooperation and develop greater complexity and intelligence.
Charles Darwin’s evolution theory of random mutation and natural selection is only half the evolution story, Agüera y Arcas said; symbiogenesis, with cooperation as its main feature, is the creative engine behind evolution.
“Life was computational from the start,” said Agüera y Arcas. “It gets more computationally complex over time through symbiogenesis because when you have two computers that come together and start cooperating, now you have a parallel computer, and a massively parallel computation that leads to more and more parallel computation, which is exactly what we see in nervous systems that consist of lots of neurons that are all computing functions in parallel.”
During his talk, Agüera y Arcas showed the audience a video of experiments he conducted at Google using a programming language to explore the development of complex programs from simple, random initial conditions.
The underlying programming language used only eight basic instructions, but after a few million interactions among the random bytes, more complex programs began to appear: they became self-reproducing and grew in complexity over time.
“It was an exploration of how self-reproducing entities can arise out of random initial conditions, which is how life must have arisen, right?” said Agüera y Arcas. “We know that life didn’t always exist in the universe. … There must have been initial conditions that are disordered from which life arises.”
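To make the setup concrete, the toy sketch below mimics the flavor of such an experiment: random byte tapes interact in pairs, and the combined tape serves as both program and data. The eight-instruction language here is an assumption chosen for illustration (Brainfuck-like, with a copy head), not the language used in the Google experiments, and this small, short run is not expected to reproduce the emergence of self-replicators.

```python
import random

OPS = b"<>+-.,[]"  # assumed toy instruction set; all other byte values are inert

def run(tape, max_steps=500):
    """Execute a combined tape in place; code and data share the same bytes."""
    n, ip, h0, h1, steps = len(tape), 0, 0, 0, 0
    while ip < n and steps < max_steps:
        op = tape[ip]
        if op == ord('<'):   h0 = (h0 - 1) % n
        elif op == ord('>'): h0 = (h0 + 1) % n
        elif op == ord('+'): tape[h0] = (tape[h0] + 1) % 256
        elif op == ord('-'): tape[h0] = (tape[h0] - 1) % 256
        elif op == ord('.'): tape[h1] = tape[h0]; h1 = (h1 + 1) % n  # copy out
        elif op == ord(','): tape[h0] = tape[h1]; h1 = (h1 + 1) % n  # copy in
        elif op == ord('[') and tape[h0] == 0:      # skip forward past matching ]
            depth = 1
            while depth and ip + 1 < n:
                ip += 1
                if tape[ip] == ord('['): depth += 1
                elif tape[ip] == ord(']'): depth -= 1
        elif op == ord(']') and tape[h0] != 0:      # loop back to just after matching [
            depth = 1
            while depth and ip > 0:
                ip -= 1
                if tape[ip] == ord(']'): depth += 1
                elif tape[ip] == ord('['): depth -= 1
        ip += 1
        steps += 1
    return tape

# "Primordial soup": random 64-byte tapes interacting in random pairs.
soup = [bytearray(random.randrange(256) for _ in range(64)) for _ in range(256)]
for _ in range(20_000):
    a, b = random.sample(range(len(soup)), 2)
    combined = run(soup[a] + soup[b])
    soup[a], soup[b] = combined[:64], combined[64:]

# A crude proxy for rising order: the share of soup bytes that are valid opcodes.
valid = sum(byte in OPS for tape in soup for byte in tape)
print(f"{valid / (len(soup) * 64):.1%} of soup bytes are instructions")
```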
Agüera y Arcas views intelligence as the ability to predict and influence the future and traces the “human intelligence explosion” to the moment when humans formed societies and began cooperating and living together. He argues that the growth and evolution of human brains accelerated when people banded together and created collective societies.
The emergence of societies was a major evolutionary transition, he said, citing the work of scientists Eörs Szathmáry and John Maynard Smith.
“Human individuals are not very smart, but when we get together, we can do amazing things, like transplanting organs and going to the moon,” said Agüera y Arcas. “Those are not individual capabilities. No individual human can do that. That’s a collective human intelligence sort of thing, and it comes about through specialization, through theory of mind, through us being able to model each other in order to work in groups.”
Placing a lit candle in a window to welcome friends and strangers is an old Irish tradition that took on greater significance when Mary Robinson was elected president of Ireland in 1990. At the time, Robinson placed a lamp in Áras an Uachtaráin — the official residence of Ireland’s presidents — noting that the Irish diaspora and all others are always welcome in Ireland. Decades later, a lit lamp remains in a window in Áras an Uachtaráin.
The symbolism of Robinson’s lamp was shared by Hashim Sarkis, dean of the MIT School of Architecture and Planning (SA+P), at the school’s graduation ceremony in May, where Robinson addressed the class of 2025. To replicate the generous intentions of Robinson’s lamp and commemorate her visit to MIT, Sarkis commissioned a unique lantern as a gift for Robinson. He commissioned an identical one for his office, which is in the front portico of MIT at 77 Massachusetts Ave.
“The lamp will welcome all citizens of the world to MIT,” says Sarkis.
No ordinary lantern
The bespoke lantern was created by Marcelo Coelho SM ’08, PhD ’12, director of the Design Intelligence Lab and associate professor of the practice in the Department of Architecture.
One of several projects in the Geolectric research effort at the Design Intelligence Lab, the lantern showcases the use of geopolymers as a sustainable material alternative for embedded computers and consumer electronics.
“The materials that we use to make computers have a negative impact on climate, so we’re rethinking how we make products with embedded electronics — such as a lamp or lantern — from a climate perspective,” says Coelho.
Consumer electronics rely on materials that are high in carbon emissions and difficult to recycle. As the demand for embedded computing increases, so too does the need for alternative materials that have a reduced environmental impact while supporting electronic functionality.
The Geolectric lantern advances the formulation and application of geopolymers — a class of inorganic materials that form covalently bonded, non-crystalline networks. Unlike traditional ceramics, geopolymers do not require high-temperature firing, allowing electronic components to be embedded seamlessly during production.
Geopolymers are similar to ceramics, but have a lower carbon footprint and present a sustainable alternative for consumer electronics, product design, and architecture. The minerals Coelho uses to make the geopolymers — aluminum silicate and sodium silicate — are those regularly used to make ceramics.
“Geopolymers aren’t particularly new, but are becoming more popular,” says Coelho. “They have high strength in both tension and compression, superior durability, fire resistance, and thermal insulation. Compared to concrete, geopolymers don’t release carbon dioxide. Compared to ceramics, you don’t have to worry about firing them. What’s even more interesting is that they can be made from industrial byproducts and waste materials, contributing to a circular economy and reducing waste.”
The lantern is embedded with custom electronics that serve as a proximity and touch sensor. When a hand is placed over the top, light shines down the glass tubes.
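The lab’s actual electronics are custom and not public, but the interaction described above can be sketched in a few lines of CircuitPython, assuming a capacitive pad on a touch-capable pin and an LED driver on a digital output; the pin assignments here are hypothetical.

```python
import time
import board
import digitalio
import touchio

touch_pad = touchio.TouchIn(board.A0)        # assumed: sensor pad wired to A0
light = digitalio.DigitalInOut(board.D5)     # assumed: LED driver on D5
light.direction = digitalio.Direction.OUTPUT

while True:
    light.value = touch_pad.value            # light follows the presence of a hand
    time.sleep(0.05)
```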
The timeless design of the Geolectric lantern — minimalist, composed of natural materials — belies its future-forward function. Coelho’s academic background is in fine arts and computer science. Much of his work, he says, “bridges these two worlds.”
Working at the Design Intelligence Lab with Coelho on the lanterns are Jacob Payne, a graduate architecture student, and Jean-Baptiste Labrune, a research affiliate.
A light for MIT
A few weeks before commencement, Sarkis saw the Geolectric lantern in Palazzo Diedo Berggruen Arts and Culture in Venice, Italy. The exhibition, a collateral event of the Venice Biennale’s 19th International Architecture Exhibition, featured the work of 40 MIT architecture faculty.
The sustainability feature of Geolectric is the key reason Sarkis regarded the lantern as the perfect gift for Robinson. After her career in politics, Robinson founded the Mary Robinson Foundation — Climate Justice, an international center addressing the impacts of climate change on marginalized communities.
The third iteration of Geolectric for Sarkis’ office is currently underway. While the lantern was a technical prototype and an opportunity to showcase his lab’s research, Coelho — an immigrant from Brazil — was profoundly touched by how Sarkis created the perfect symbolism to both embody the welcoming spirit of the school and honor President Robinson.
“When the world feels most fragile, we need to urgently find sustainable and resilient solutions for our built environment. It’s in the darkest times when we need light the most,” says Coelho.
A team of MIT geochemists has unearthed new evidence in very old rocks suggesting that some of the first animals on Earth were likely ancestors of the modern sea sponge.
In a study appearing today in the Proceedings of the National Academy of Sciences, the researchers report that they have identified “chemical fossils” that may have been left by ancient sponges in rocks that are more than 541 million years old. A chemical fossil is a remnant of a biomolecule that originated from a living organism that has since been buried, transformed, and preserved in sediment, sometimes for hundreds of millions of years.
The newly identified chemical fossils are special types of steranes, which are the geologically stable form of sterols, such as cholesterol, that are found in the cell membranes of complex organisms. The researchers traced these special steranes to a class of sea sponges known as demosponges. Today, demosponges come in a huge variety of sizes and colors, and live throughout the oceans as soft and squishy filter feeders. Their ancient counterparts may have shared similar characteristics.
“We don’t know exactly what these organisms would have looked like back then, but they absolutely would have lived in the ocean, they would have been soft-bodied, and we presume they didn’t have a silica skeleton,” says Roger Summons, the Schlumberger Professor of Geobiology Emeritus in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS).
The group’s discovery of sponge-specific chemical fossils offers strong evidence that the ancestors of demosponges were among the first animals to evolve, and that they likely did so much earlier than the rest of Earth’s major animal groups.
The study’s authors, including Summons, are lead author and former MIT EAPS Crosby Postdoctoral Fellow Lubna Shawar, who is now a research scientist at Caltech, along with Gordon Love from the University of California at Riverside, Benjamin Uveges of Cornell University, Alex Zumberge of GeoMark Research in Houston, Paco Cárdenas of Uppsala University in Sweden, and José-Luis Giner of the State University of New York College of Environmental Science and Forestry.
Sponges on steroids
The new study builds on findings that the group first reported in 2009. In that study, the team identified the first chemical fossils that appeared to derive from ancient sponges. They analyzed rock samples from an outcrop in Oman and found a surprising abundance of steranes that they determined were the preserved remnants of 30-carbon (C30) sterols — a rare form of steroid that they showed was likely derived from ancient sea sponges.
The steranes were found in rocks that were very old and formed during the Ediacaran Period, which spans from about 635 million to roughly 541 million years ago. This period took place just before the Cambrian, when the Earth experienced a sudden and global explosion of complex multicellular life. The team’s discovery suggested that ancient sponges appeared much earlier than most multicellular life, and were possibly one of Earth’s first animals.
However, soon after these findings were released, alternative hypotheses swirled to explain the C30 steranes’ origins, including that the chemicals could have been generated by other groups of organisms or by nonliving geological processes.
The team says the new study reinforces their earlier hypothesis that ancient sponges left behind this special chemical record, as they have identified a new chemical fossil in the same Precambrian rocks that is almost certainly biological in origin.
Building evidence
Just as in their previous work, the researchers looked for chemical fossils in rocks that date back to the Ediacaran Period. They acquired samples from drill cores and outcrops in Oman, western India, and Siberia, and analyzed the rocks for signatures of steranes, the geologically stable form of sterols found in all eukaryotes (plants, animals, and any organism with a nucleus and membrane-bound organelles).
“You’re not a eukaryote if you don’t have sterols or comparable membrane lipids,” Summons says.
A sterol’s core structure consists of four fused carbon rings. Additional carbon side chains and chemical add-ons can attach to and extend a sterol’s structure, depending on what an organism’s particular genes can produce. In humans, for instance, the sterol cholesterol contains 27 carbon atoms, while the sterols in plants generally have 29 carbon atoms.
“It’s very unusual to find a sterol with 30 carbons,” Shawar says.
The chemical fossil the researchers identified in 2009 derived from a 30-carbon sterol. What’s more, the team determined that the compound could be synthesized thanks to a distinctive enzyme, which is encoded by a gene common to demosponges.
In their new study, the team focused on the chemistry of these compounds and realized the same sponge-derived gene could produce an even rarer sterol, with 31 carbon atoms (C31). When they analyzed their rock samples for C31 steranes, they found them in surprising abundance, along with the aforementioned C30 steranes.
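For readers keeping track of the carbon counts, the article's examples can be collected into a small lookup. The mapping below simply restates what is described above; it is an illustrative summary, not a comprehensive sterol classification.

```python
# Carbon counts of sterols mentioned in the article and their typical sources.
# Illustrative summary only, not a comprehensive classification of sterols.
sterol_sources = {
    27: "animals (e.g., cholesterol in humans)",
    29: "plants (typical plant sterols)",
    30: "demosponges (the rare C30 sterols reported in the team's 2009 study)",
    31: "demosponges (the even rarer C31 sterols identified in the new study)",
}

if __name__ == "__main__":
    for carbons, source in sterol_sources.items():
        print(f"C{carbons}: {source}")
```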
“These special steranes were there all along,” Shawar says. “It took asking the right questions to seek them out and to really understand their meaning and from where they come.”
The researchers also obtained samples of modern-day demosponges and analyzed them for C31 sterols. They found that, indeed, the sterols — biological precursors of the C31 steranes found in rocks — are present in some species of contemporary demosponges. Going a step further, they chemically synthesized eight different C31 sterols in the lab as reference standards to verify their chemical structures. Then, they processed the molecules in ways that simulate how the sterols would change when deposited, buried, and pressurized over hundreds of millions of years. They found that the products of only two such sterols were an exact match with the form of C31 steranes that they found in ancient rock samples. The presence of those two, and the absence of the other six, demonstrates that these compounds were not produced by a random nonbiological process.
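As a rough sketch of that final matching step, one can imagine comparing the signatures of the artificially matured reference standards against what is measured in the rock. The compound names, numbers, and tolerance below are invented placeholders for illustration; they are not the team's data or analysis software.

```python
# Hypothetical illustration of matching lab-matured C31 sterol standards to
# steranes measured in ancient rocks. All identifiers and numbers are invented.

MATCH_TOLERANCE = 0.05  # arbitrary closeness threshold on a stand-in "signature" value

# Stand-in signatures for eight synthesized standards after simulated maturation.
matured_standards = {
    "standard_1": 0.42,
    "standard_2": 0.87,
    "standard_3": 1.13,
    "standard_4": 1.55,
    "standard_5": 1.90,
    "standard_6": 2.31,
    "standard_7": 2.74,
    "standard_8": 3.10,
}

# Stand-in signatures of the C31 steranes detected in the rock samples.
rock_steranes = [0.88, 1.54]

def matching_standards(rock_values, standards, tol=MATCH_TOLERANCE):
    """Return the standards whose matured product matches a measured sterane."""
    matches = set()
    for value in rock_values:
        for name, signature in standards.items():
            if abs(signature - value) <= tol:
                matches.add(name)
    return sorted(matches)

if __name__ == "__main__":
    # Only a subset matching (here 2 of 8) is what argues against a random,
    # nonbiological origin for the rock-hosted compounds.
    print(matching_standards(rock_steranes, matured_standards))
```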
The findings, reinforced by multiple lines of inquiry, strongly support the idea that the steranes that were found in ancient rocks were indeed produced by living organisms, rather than through geological processes. What’s more, those organisms were likely the ancestors of demosponges, which to this day have retained the ability to produce the same series of compounds.
“It’s a combination of what’s in the rock, what’s in the sponge, and what you can make in a chemistry laboratory,” Summons says. “You’ve got three supportive, mutually agreeing lines of evidence, pointing to these sponges being among the earliest animals on Earth.”
“In this study we show how to authenticate a biomarker, verifying that a signal truly comes from life rather than contamination or non-biological chemistry,” Shawar adds.
Now that the team has shown C30 and C31 sterols are reliable signals of ancient sponges, they plan to look for the chemical fossils in ancient rocks from other regions of the world. They can only tell from the rocks they’ve sampled so far that the sediments, and the sponges, formed sometime during the Ediacaran Period. With more samples, they will have a chance to home in on when some of the first animals took form.
This research was supported, in part, by the MIT Crosby Fund, the Distinguished Postdoctoral Fellowship program, the Simons Foundation Collaboration on the Origins of Life, and the NASA Exobiology Program.
What science says about Mom’s happiness advice
Data, wisdom meet in social psychologist’s lecture
Harry Pierre
DCE Communications
September 29, 2025
Daniel Gilbert’s mother gave him some advice many years ago on finding happiness.
Last week the social psychologist broke down what the science says about his mother’s three-pronged formula — marriage, money, and children — during the Division of Continuing Education’s inaugural Dean’s Distinguished Lecture in Sanders Theatre.
“I thought my mom’s recipe for happiness was original,” said the Edgar Pierce Professor of Psychology, “but then I became a scientist and discovered that everybody’s mom had this recipe.”
According to Gilbert, his mother may have been partly right — at least when it comes to marriage. Studies have shown that married people are on average happier than those who are unmarried, and the effect holds across decades of data. But he added, “It isn’t marriage, per se, that makes you happy. It’s the good marriage you have. If a marriage is good enough to keep, you’ll likely get a happiness boost from keeping it. If it isn’t, you’ll likely get one from leaving.”
Turning to income, Gilbert debunked the ancient notion that money and happiness are unrelated. “When people are hungry, cold, or sick, they are not happy,” he said. “Money absolutely makes people happy — because it buys them out of almost every form of human misery.”
However, research shows that the relationship between money and happiness follows a flattening curve: people at lower income levels get substantially happier as they earn more, but the gains level off once income passes a certain point. Gilbert cited work by Nobel Prize winners Daniel Kahneman and Angus Deaton, who found that increasing social connections often outweighed massive financial gain. Their 2010 study revealed that the mood boost from spending a day with loved ones was seven times larger than the boost from quadrupling annual income.
“Trading time with people you love for money that won’t do anything for your happiness is a very bad deal,” Gilbert said.
The third ingredient in Gilbert’s mother’s “recipe” — having children — proved more complicated. While many parents describe their children as their greatest source of joy, Gilbert pointed to data showing that, on average, parents report less happiness while raising kids. The effect is particularly pronounced for young, single mothers, while older, married fathers tend to report the largest boosts.
“Children can be a great source of happiness,” Gilbert said, “but they can also be a great source of stress and hard work. Whether they increase or decrease happiness depends on how those two things balance out.”
For ages, happiness was often seen as a matter of luck. Gilbert noted that because of advances in agriculture, industry, and technology, many people today live longer, healthier, and more prosperous lives than their ancestors could have imagined. “For the very first time, human happiness is not just in the hands of fate,” he said. “To a large extent, your happiness is under your control.”
Gilbert shared that he dropped out of high school at 16. He later found his way back to education through continuing studies and never imagined he would one day be on stage at Harvard. “Those opportunities allowed me to imagine a future that was very different than the one I was getting prepared for,” he told the audience.
The lecture, part of the division’s yearlong 50th anniversary celebration, is expected to be an annual signature event hosted by the division to foster community and discussion on thought-provoking topics, said DCE Dean Nancy Coleman.
Marking 100 years of Norton Lectures
Eileen O’Grady
Harvard Staff Writer
September 29, 2025
Panelists reflect on ‘incredible value’ of annual series as ‘megaphone’ for artists and scholars
In November 1926, Oxford classics scholar Gilbert Murray stood before an audience in Harvard’s Lowell Lecture Hall to deliver the first-ever talk in the newly endowed Charles Eliot Norton Professorship of Poetry. His lectures on the classical tradition drew such crowds that, according to a Crimson story published at the time, the final one had to be moved to Boston’s Symphony Hall to accommodate the demand.
“We’re now in the 100th year, and this distinguished lecture series has witnessed a century of individuals delivering lectures on literature, music and the visual arts,” director Suzannah Clark told an audience at Farkas Hall at a recent event marking the milestone anniversary.
In a panel discussion moderated by Arts and Humanities Dean Sean Kelly, Donald P. and Katherine B. Loker Professor of English Stephanie Burt, Franklin D. and Florence Rosenblatt Professor of the Arts Vijay Iyer, writer Viet Thanh Nguyen, and The New Yorker staff writer Adam Gopnik discussed their relationship to the longstanding lecture series and its impact on arts and humanities fields.
“A healthy democracy depends, yes, on the rule of law and fair elections, but it depends just as much on having a flourishing, pluralistic culture,” said Gopnik. “The idea that you have had lectures on subjects that may seem esoteric, that are open to the public, that’s a simple idea of incredible value. When I look at the Norton Lectures I think about the power and fragility of pluralistic culture, and I think we have to be more committed to it now than we have ever been.”
Each of the panelists wrote a new foreword to a past Norton Lecture released this month by Harvard University Press. Iyer wrote on the 1939-40 lectures of Igor Stravinsky, Burt on the 1989-90 lectures of John Ashbery, Nguyen on the 1967-68 lectures of Jorge Luis Borges, and Gopnik on the 1956-57 lectures of Ben Shahn. Anne T. and Robert M. Bass Professor of English Louis Menand also wrote a foreword to the 1992-93 lectures of Umberto Eco.
Nguyen, who delivered last year’s Norton Lectures, described the experience as “nerve-wracking,” jokingly calling it “the ultimate final exam” for an academic. The Pulitzer Prize-winning author of “The Sympathizer” said that, as a Vietnamese refugee, the invitation felt like a form of canonization, or entry into an elite cultural tradition, but that, for him, the series is significant less for its prestige and more for the way it has centered the voices of outsiders.
“The relationship to inclusion in the canon is really important, because a lot of people who are included in the canon and in the Norton series are people who come from the outside,” Nguyen said. “They’re often people who struggle with the very notion of culture and what it represents, culture as a mode of artistic possibility, intellectual accomplishment, but culture as a mode of power. That, in the end, to me, is what gives a Norton series its significance, is our recognition of the multivalent nature of the power of art.”
Burt said she thinks of the Norton Lectures — and Harvard at large — less as an instrument of canonization and more as a way to amplify the voices of artists.
“John Ashbery wanted to tell you who some of his favorite artists were. Harvard handed him a giant megaphone, and he said, ‘Hey, go read John Clare,’” Burt said. She recalled hearing Ashbery, in his 1989 talk, say that artists should draw inspiration from whatever obscure or eccentric figures excite them the most, rather than relying on the traditional “war horses” of literature like John Milton, T.S. Eliot, or Henry James.
“Art can come from anywhere,” Burt added. “You can make art anywhere, and you’re going to make more interesting art if you look for art by and about and for people who aren’t like you.”
Iyer, who recalled the sense of awe he felt at hearing jazz musician Herbie Hancock deliver the Norton Lectures in 2014, said the series also offers an opportunity for an institution like Harvard to learn something. Artists have a reach and an impact in the broader world that an academic institution does not always have, Iyer said.
“There is a sort of insularity that happens in the institution,” Iyer said. “When a moment like Herbie Hancock giving the Norton Lectures happens it’s like the floodgates open and Harvard learned something new about the world, and new relationships are formed, new truths are revealed.”
This year’s series, which starts Tuesday, will feature six lectures by award-winning “Hunger” (2008) and “12 Years a Slave” (2013) filmmaker Steve McQueen.
Bassler’s pioneering research on the molecular mechanisms used by bacteria for intercellular communication has shown life-saving potential. University Professor is Princeton’s highest honor for faculty.