
Weight loss drugs protect heart patients, study suggests

Health

40% lower risk of hospitalization or death

Mass General Brigham Communications

3 min read
Hand holding semaglutide injection pen.

High-risk patients with heart failure had an over 40 percent lower risk of hospitalization or death after initiating the weight-loss drugs semaglutide or tirzepatide, compared with a placebo proxy, according to a study out of Harvard-affiliated Mass General Brigham.

Specifically, researchers looked at heart failure with preserved ejection fraction (HFpEF), a condition where the heart’s ability to pump remains intact, yet the heart’s muscle has become so thick and stiff that the amount of blood being pumped doesn’t meet the body’s needs. This form of heart failure is especially common among people with obesity and Type 2 diabetes.

“Despite the widespread morbidity and mortality burden of HFpEF, current treatment options are limited,” said corresponding author Nils Krüger of the Division of Pharmacoepidemiology and Pharmacoeconomics at Brigham and Women’s Hospital and a postdoctoral research fellow at Harvard Medical School. “Both semaglutide and tirzepatide are well-known for their effects on weight loss and blood sugar control, but our study suggests they may also offer substantial benefits to patients with obesity and Type 2 diabetes by reducing adverse heart failure outcomes.”

By analyzing real-world data from over 90,000 HFpEF patients with obesity and Type 2 diabetes, researchers from Mass General Brigham demonstrated that GLP-1 medications may significantly reduce the risk of hospitalization due to heart failure and all-cause mortality. The findings were published in JAMA and presented simultaneously at the European Society of Cardiology Congress.

Despite promising results from existing randomized controlled trials of semaglutide and tirzepatide in those with obesity-related HFpEF, regulatory authorities and professional societies have not approved or endorsed the use of these drugs for HFpEF, due in part to the studies’ relatively small sample sizes and unknown generalizability. The researchers therefore used data from three large U.S. insurance claims databases to emulate two previous, placebo-controlled trials of semaglutide and tirzepatide in new study populations that were an average of 19 times larger than those previously evaluated.

The researchers compared the one-year risk of heart failure hospitalization or death in new users of each GLP-1 drug to the risk of those outcomes in a “placebo” group of patients taking sitagliptin, a diabetes drug known to have no impact on HFpEF. After verifying the results of the previous, highly controlled studies, the researchers expanded their study population to make it more reflective of HFpEF cases in clinical practice, finding that overall, the drugs were associated with a greater than 40 percent reduction in heart failure hospitalization or all-cause mortality as compared with sitagliptin. Semaglutide and tirzepatide had similar effectiveness.
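
The comparison at the heart of this design is a one-year risk contrast between new users of a GLP-1 drug and new users of the sitagliptin comparator. As a purely illustrative sketch (a made-up toy cohort, not the study's actual analysis, which emulated randomized trials in three large claims databases), the basic risk-ratio arithmetic could look like this:

```python
import pandas as pd

# Hypothetical toy cohort: one row per patient, with the treatment arm and
# whether the composite outcome (heart failure hospitalization or death)
# occurred within one year of starting the drug.
cohort = pd.DataFrame({
    "arm": ["glp1", "glp1", "glp1", "sitagliptin", "sitagliptin", "sitagliptin"],
    "outcome_within_1yr": [0, 1, 0, 1, 0, 1],
})

# One-year risk in each arm: the share of patients with the composite outcome.
risk = cohort.groupby("arm")["outcome_within_1yr"].mean()

# Risk ratio of GLP-1 initiators versus the sitagliptin ("placebo proxy") arm.
# A value near 0.6 would correspond to the roughly 40 percent relative
# reduction reported in the study.
risk_ratio = risk["glp1"] / risk["sitagliptin"]
print(risk.round(2))
print(f"Risk ratio (GLP-1 vs. sitagliptin): {risk_ratio:.2f}")
```

In practice, analyses like the one described above also adjust for differences between the treatment groups, which this toy calculation deliberately omits.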

Notably, both drugs had acceptable safety profiles. In the future, the researchers hope to clarify the long-term impact of GLP-1 medications, the HFpEF subpopulations that may derive the most benefit from them, and whether the drugs are also effective in reducing other cardiovascular risks. 

“By using nationwide data and an innovative methodological approach, our team was able to expand the findings of previous trials to larger populations more representative of HFpEF patients treated in clinical practice,” Krüger said. “Our findings show that in the future, GLP-1 targeting medications could provide a much-needed treatment option for patients with heart failure.”

New particle detector passes the “standard candle” test

A new and powerful particle detector just passed a critical test in its goal to decipher the ingredients of the early universe.

The sPHENIX detector is the newest experiment at Brookhaven National Laboratory’s Relativistic Heavy Ion Collider (RHIC) and is designed to precisely measure products of high-speed particle collisions. From the aftermath, scientists hope to reconstruct the properties of quark-gluon plasma (QGP) — a white-hot soup of subatomic particles known as quarks and gluons that is thought to have sprung into existence in the few microseconds following the Big Bang. Just as quickly, the mysterious plasma disappeared, cooling and combining to form the protons and neutrons that make up today’s ordinary matter.

Now, the sPHENIX detector has made a key measurement that proves it has the precision to help piece together the primordial properties of quark-gluon plasma.

In a paper in the Journal of High Energy Physics, scientists including physicists at MIT report that sPHENIX precisely measured the number and energy of particles that streamed out from gold ions that collided at close to the speed of light.

Straight ahead

This test is considered in physics to be a “standard candle,” meaning that the measurement is a well-established constant that can be used to gauge a detector’s precision.

In particular, sPHENIX successfully measured the number of charged particles that are produced when two gold ions collide, and determined how this number changes when the ions collide head-on, versus just glancing by. The detector’s measurements revealed that head-on collisions produced 10 times more charged particles, which were also 10 times more energetic, compared to less straight-on collisions.
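
To make the idea of this check concrete, here is a purely illustrative Python sketch on an invented toy event table (not sPHENIX analysis code): it bins collisions by how head-on they are, using a "centrality" percentage where 0 percent means fully head-on, and compares charged-particle counts and total recorded energy between the most central and most peripheral classes.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Hypothetical per-event summaries: centrality (0% = perfectly head-on,
# 100% = barely glancing), the number of charged particles detected, and
# the total energy recorded by the detector (arbitrary units).
n_events = 10_000
centrality = rng.uniform(0, 100, n_events)
# Toy model only: multiplicity and energy fall off steeply as collisions
# become more peripheral, loosely mimicking the tenfold contrast described.
n_charged = rng.poisson(2000 * np.exp(-centrality / 35))
energy = n_charged * rng.normal(1.0, 0.1, n_events)

events = pd.DataFrame({"centrality": centrality,
                       "n_charged": n_charged,
                       "energy": energy})

# Compare the most head-on class with a far more peripheral one.
central = events[events["centrality"] < 10]
peripheral = events[events["centrality"] > 70]

print("multiplicity ratio:", central["n_charged"].mean() / peripheral["n_charged"].mean())
print("energy ratio:      ", central["energy"].mean() / peripheral["energy"].mean())
```

A real centrality analysis is far more involved, but the shape of the comparison, more particles and more energy as collisions become more head-on, is the same.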

“This indicates the detector works as it should,” says Gunther Roland, professor of physics at MIT, who is a member of and former spokesperson for the sPHENIX Collaboration. “It’s as if you sent a new telescope up in space after you’ve spent 10 years building it, and it snaps the first picture. It’s not necessarily a picture of something completely new, but it proves that it’s now ready to start doing new science.”

“With this strong foundation, sPHENIX is well-positioned to advance the study of the quark-gluon plasma with greater precision and improved resolution,” adds Hao-Ren Jheng, a graduate student in physics at MIT and a lead co-author of the new paper. “Probing the evolution, structure, and properties of the QGP will help us reconstruct the conditions of the early universe.”

The paper’s co-authors are all members of the sPHENIX Collaboration, which comprises over 300 scientists from multiple institutions around the world, including Roland, Jheng, and physicists at MIT’s Bates Research and Engineering Center.

“Gone in an instant”

Particle colliders such as Brookhaven’s RHIC are designed to accelerate particles at “relativistic” speeds, meaning close to the speed of light. When these particles are flung around in opposite, circulating beams and brought back together, any smash-ups that occur can release an enormous amount of energy. In the right conditions, this energy can very briefly exist in the form of quark-gluon plasma — the same stuff that sprung out of the Big Bang.

Just as in the early universe, quark-gluon plasma doesn’t hang around for very long in particle colliders. If and when QGP is produced, it exists for just 10^-22 seconds (about a sextillionth of a second). In this moment, quark-gluon plasma is incredibly hot, up to several trillion degrees Celsius, and behaves as a “perfect fluid,” moving as one entity rather than as a collection of random particles. Almost immediately, this exotic behavior disappears, and the plasma cools and transitions into more ordinary particles such as protons and neutrons, which stream out from the main collision.

“You never see the QGP itself — you just see its ashes, so to speak, in the form of the particles that come from its decay,” Roland says. “With sPHENIX, we want to measure these particles to reconstruct the properties of the QGP, which is essentially gone in an instant.”

“One in a billion”

The sPHENIX detector is the next generation of Brookhaven’s original Pioneering High Energy Nuclear Interaction eXperiment, or PHENIX, which measured collisions of heavy ions generated by RHIC. In 2021, sPHENIX was installed in place of its predecessor, as a faster and more powerful version, designed to detect quark-gluon plasma’s more subtle and ephemeral signatures.

The detector itself is about the size of a two-story house and weighs around 1,000 tons. It sits at the intersection of RHIC’s two main collider beams, where relativistic particles, accelerated from opposite directions, meet and collide, producing particles that fly out into the detector. The sPHENIX detector is able to catch and measure 15,000 particle collisions per second, thanks to its novel, layered components, including the MVTX, or micro-vertex — a subdetector that was designed, built, and installed by scientists at MIT’s Bates Research and Engineering Center.

Together, the detector’s systems enable sPHENIX to act as a giant 3D camera that can track the number, energy, and paths of individual particles during an explosion of particles generated by a single collision.

“SPHENIX takes advantage of developments in detector technology since RHIC switched on 25 years ago, to collect data at the fastest possible rate,” says MIT postdoc Cameron Dean, who was a main contributor to the new study’s analysis. “This allows us to probe incredibly rare processes for the first time.”

In the fall of 2024, scientists ran the detector through the “standard candle” test to gauge its speed and precision. Over three weeks, they gathered data from sPHENIX as the main collider accelerated and smashed together beams of gold ions traveling at nearly the speed of light. Their analysis of the data showed that sPHENIX accurately measured the number of charged particles produced in individual gold ion collisions, as well as the particles’ energies. What’s more, the detector was sensitive to a collision’s “head-on-ness,” and could observe that head-on collisions produced more particles with greater energy, compared to less direct collisions.

“This measurement provides clear evidence that the detector is functioning as intended,” Jheng says.

“The fun for sPHENIX is just beginning,” Dean adds. “We are currently back colliding particles and expect to do so for several more months. With all our data, we can look for the one-in-a-billion rare process that could give us insights on things like the density of QGP, the diffusion of particles through ultra-dense matter, and how much energy it takes to bind different particles together.”

This work was supported, in part, by the U.S. Department of Energy Office of Science, and the National Science Foundation.

Credit: Courtesy of Brookhaven National Laboratory

The sPHENIX detector is the newest experiment at Brookhaven National Laboratory’s Relativistic Heavy Ion Collider (RHIC) and is designed to precisely measure products of high-speed particle collisions. This image shows the installation of the inner hadronic calorimeter within the core of the sPHENIX superconducting solenoid magnet.

Advancing career and academic ambitions with MITx MicroMasters Program in Finance

For a long time, Satik Movsesyan envisioned a future of working in finance and also pursuing a full-time master’s degree program at the MIT Sloan School of Management. She says the MITx MicroMasters Program in Finance provides her with the ideal opportunity to directly enhance her career with courses developed and delivered by MIT Sloan faculty.

Movsesyan first began actively pursuing ways to connect with the MIT community as a first-year student in her undergraduate program at the American University of Armenia, where she majored in business with a concentration in accounting and finance. That’s when she discovered the MicroMasters Program in Finance. Led by MIT Open Learning and MIT Sloan, the program offers learners an opportunity to advance in the finance field through a rigorous, comprehensive online curriculum comprising foundational courses, mathematical methods, and advanced modeling. During her senior year, she started taking courses in the program, beginning with 15.516x (Financial Accounting).

“I saw completing the MicroMasters program as a way to accelerate my time at MIT offline, as well as to prepare me for the academic rigor,” says Movsesyan. “The program provides a way for me to streamline my studies, while also working toward transforming capital markets here in Armenia — in a way, also helping me to streamline my career.”

Movsesyan started as an intern at C-Quadrat Ampega Asset Management Armenia and was promoted to her current role of financial analyst. The firm is one of two pension asset managers in Armenia. Movsesyan credits the MicroMasters program with helping her draw deeper inferences in her analytical work and empowering her to create more sophisticated dynamic models to support the efficient allocation of assets. Her learning has enabled her to build different valuation models for financial instruments. She is currently developing a portfolio management tool for her company.

“Although the courses are grounded deeply in theory, they never lack a perfect applicability component, which makes them very useful,” says Movsesyan. “Having MIT’s MicroMasters on a CV adds credibility as a professional, and your input becomes more valued by the employer.”

Movsesyan says that the program has helped her to develop resilience, as well as critical and analytical thinking. Her long-term goal is to become a portfolio manager and ultimately establish an asset management company, targeted at offering an extensive range of funds based on diverse risk-return preferences of investors, while promoting transparent and sustainable investment practices. 

“The knowledge I’ve gained from the variety of courses is a perfect blend which supports me day-to-day in building solutions to existing problems in asset management,” says Movsesyan.

In addition to being a learner in the program, Movsesyan serves as a community teaching assistant (CTA). After taking 15.516x, she became a CTA for that course, working with learners around the world. She says that this role of helping and supporting others requires constantly immersing herself in the course content, which also results in challenging herself and mastering the material.

“I think my story with the MITx MicroMasters Program is proof that no matter where you are — even if you’re in a small, developing country with limited resources — if you truly want to do something, you can achieve what you want,” says Movsesyan. “It’s an example for students around the world who also have transformative ideas and determination to take action. They can be a part of the MIT community.”

Photo courtesy of Satik Movsesyan.

“My story with the MITx MicroMasters Program is proof that no matter where you are — even if you’re in a small, developing country with limited resources — if you truly want to do something, you can achieve what you want,” says Satik Movsesyan, who completed the MITx MicroMasters Program in Finance following her graduation from the American University of Armenia in 2024.

Brain cancer cells can be ‘reprogrammed’ to stop them from spreading

Computer illustration of a brain tumour

Scientists have found a way to stop brain cancer cells spreading by essentially ‘freezing’ a key molecule in the brain. The finding could pave the way for a new type of treatment for glioblastoma, the most aggressive form of brain cancer, although extensive testing will be required before it can be trialled in patients. Glioblastoma is also the most common type of brain cancer, with a five-year survival rate of just 15%.

The researchers, from the University of Cambridge, found that cancer cells rely on the flexibility of hyaluronic acid (HA) — a sugar-like polymer that makes up much of the brain’s supporting structure — to latch onto receptors on the surface of cancer cells to trigger their spread throughout the brain.

By locking HA molecules in place so that they lose this flexibility, the researchers were able to ‘reprogramme’ glioblastoma cells so they stopped moving and were unable to invade surrounding tissue. Their results are reported in the journal Royal Society Open Science.

“Fundamentally, hyaluronic acid molecules need to be flexible to bind to cancer cell receptors,” said Professor Melinda Duer from Cambridge’s Yusuf Hamied Department of Chemistry, who led the research. “If you can stop hyaluronic acid being flexible, you can stop cancer cells from spreading. The remarkable thing is that we didn’t have to kill the cells — we simply changed their environment, and they gave up trying to escape and invade neighbouring tissue.”

Glioblastoma, like all brain cancers, is difficult to treat. Even when tumours are surgically removed, cancer cells that have already infiltrated the brain often cause regrowth within months. Current drug treatments struggle to penetrate the tumour mass, and radiotherapy can only delay, not prevent, recurrence of the cancer.

However, the approach developed by the Cambridge team does not target tumour cells directly, but instead attempts to change the tumour’s surrounding environment – the extracellular matrix – to stop its spread.

“Nobody has ever tried to change cancer outcomes by changing the matrix around the tumour,” said Duer. “This is the first example where a matrix-based therapy could be used to reprogramme cancer cells.”

Using nuclear magnetic resonance (NMR) spectroscopy, the team showed that HA molecules twist into shapes that allow them to bind strongly to CD44 — a receptor on cancer cells that drives invasion. When HA was cross-linked and ‘frozen’ into place, those signals were shut down.

The effect was seen even at low concentrations of HA, suggesting the cells were not being physically trapped but instead reprogrammed into a dormant state.

The study may also explain why glioblastoma often returns at the site of surgery. A build-up of fluid, or oedema, at the surgical site dilutes HA, making it more flexible and potentially encouraging cell invasion. By freezing HA in place, it could be possible to prevent recurrence.

“This could be a real opportunity to slow glioblastoma progression,” said Duer. “And because our approach doesn’t require drugs to enter every single cancer cell, it could in principle work for many solid tumours where the surrounding matrix drives invasion.

“Cancer cells behave the way they do in part because of their environment. If you change their environment, you can change the cells.”

The researchers are hoping to conduct further testing in animal models, which could lead to clinical trials in patients.

The research was supported in part by the European Research Council and the Engineering and Physical Sciences Research Council (EPSRC), part of UK Research and Innovation (UKRI). Melinda Duer is a Fellow of Robinson College, Cambridge.

Melinda Duer will be discussing her research on Saturday, 27 September, as part of the Cambridge Alumni Festival 2025.

Reference:
Uliana Bashtanova, Agne Kuraite, Rakesh Rajan, Melinda J Duer. ‘Molecular flexibility of hyaluronic acid has a profound effect on invasion of cancer cells.’ Royal Society Open Science (2025). DOI: 10.1098/rsos.251036


TF-NUS LEaRN 2025: Lessons on leadership, culture and community building

For university students, summer break is more than a time to relax – it is also an opportunity for learning beyond the classroom.

Jazmine Lin, a third-year NUS Political Science undergraduate, participated in the Temasek Foundation-NUS Leadership Enrichment and Regional Networking (TF-NUS LEaRN) Programme 2025 during the recent summer break, discovering that it offered her the best of both worlds: time to recharge combined with enriching learning experiences. She was among the 59 university students from across Southeast Asia who were able to learn more about the region, develop their leadership capabilities collaboratively and form new friendships through the programme.

Immersing in Chiang Mai’s culture

The programme kicked off in May with a two-week immersion in Chiang Mai, Thailand, hosted by the Language Institute of Chiang Mai University (CMU). Thirty students from NUS, Singapore Institute of Technology and Singapore University of Social Sciences had the opportunity to interact with local community leaders and participate in various leadership development workshops. Through field trips, they were able to learn more about the different communities, their unique cultural practices and identities, as well as how the locals tapped into their surrounding resources to make a living.

One such visit took students to Nai Suan (which means ‘In the Garden’ in Thai), a community enterprise in the Mae Rim district that upcycles fallen leaves into biodegradable bowls and plates. This initiative not only promotes sustainable living, but it is also a source of income for the locals who collect the leaves. Students were invited to experience the process themselves – from washing the leaves and removing their veins, to moulding them into the final products by using a hydraulic press.

In the Doi Saket district, students visited the Ban Baiboon Thai-Tai Lue Wisdom Learning Center, which aims to preserve the indigenous Tai Lue culture by offering homestays and tours, as well as workshops where visitors can try their hand at various traditional crafts. Through touring traditional Tai Lue houses, observing the process of handicraft-making and sampling local snacks, students gained first-hand experience of the community’s authentic way of life.

The Chiang Mai leg concluded with a Sustainable Environment Hackathon, where students applied their learnings to develop solutions to address regional environmental challenges. They pitched their ideas to the faculty members from CMU’s Faculty of Science, sparking a lively exchange of innovation and teamwork.

Deepening regional understanding in Singapore

Upon returning to Singapore in July for the next leg of the programme, the students were joined by 29 peers from 19 universities across Southeast Asia.

In the first week, the students focused on community leadership, which emphasised skills such as teamwork and active listening. Students from the Nanyang Technological University and Singapore Management University TF-LEaRN programme also participated in the activities, creating a more dynamic environment.

For the second week, students explored the three key themes of greenery, water and racial harmony, all of which are hallmarks of Singapore’s identity and intrinsically important to national development. By participating in leadership seminars, fireside chats with NUS students and alumni who shared their experiences in community leadership, as well as learning journeys to Marina Barrage and Ba’alwie Mosque and a Veggie Rescue activity in Little India, they gained insights into Singapore’s approach to urban sustainability and multiculturalism. Many overseas students shared how this approach contrasted with those from their home countries, engendering enriching discussions on the region’s shared challenges and diverse approaches.

In the final week, students took part in a futures thinking segment conducted by the Lee Kuan Yew School of Public Policy at NUS, where they were introduced to tools such as horizon scanning and scenario communication, which are useful in anticipating trends and challenges, as well as developing effective strategies.

They were later divided into groups and tasked with identifying and addressing a community development issue in a Southeast Asian country of their choice. Through prototyping, brainstorming and presenting their ideas, students honed their ability to collaborate across cultural and academic disciplinary boundaries.

The entire programme has been an eye-opening experience for Jazmine. “Through the various talks, lectures and learning journeys, I saw how community leadership can come in many different forms. It was interesting to witness how different ideas came to life in both countries and the experience was made richer with the perspectives and insights from our Southeast Asian buddies. But beyond all the learnings, what stayed with me the most were the friendships built, and I believe this will endure well beyond the programme.”

By NUS Global Relations Office

‘We mark your belonging here’

Campus & Community

President Alan Garber welcomes the Class of 2029 during Convocation.

Photos by Veasey Conway/Harvard Staff Photographer

Christina Pazzanese

Harvard Staff Writer

4 min read

Garber urges Class of 2029 to teach, learn from one another, reject viewing world in simple binaries

Alan Garber can still recall arriving on campus as a first-year in 1973.

The University president told the Class of 2029 in his Convocation address Monday afternoon that it quickly becomes apparent to all that Harvard is that rare place that offers almost limitless opportunities to experiment and explore whatever intellectual pursuits or interests students have.

But, he counseled the group, they must take care to avoid overlooking the one resource that may very well turn out to be the most valuable and enduring — each other.

“Each of you is here to teach as you learn,” said Garber, an economist, physician, and healthcare policy expert, to the students, faculty, and others gathered at Tercentenary Theatre. “You are here to share your experience and perspective so that our community can be one in which all people are welcomed, all ideas are given due consideration, and all beliefs are treated with respect.”

Held just before classes begin each fall semester, Convocation serves as the University’s official welcome to first-years marking the start of their new lives as undergraduates. Harvard officials addressed students, offered some tried-and-true wisdom about College life, and shared some of the University’s values, history, and traditions.

Garber told the class they share two qualities: All are exceptional students, and all are “capable of making interesting and unusual decisions, not always the ones that others would make.”

That kind of openness and creativity springs from a certain mindset, he said: “You reject ‘either/or.’ You are the kind of ‘both/and’ people that this institution has nurtured, empowered, and celebrated throughout its long history.”

While the first semester of College is exciting and a bit daunting, Garber advised students to resist the urge to seek refuge in the familiar. Instead, he said, they should embrace feeling uncomfortable, pursue unfamiliar people and experiences, and “consider the difficulties and challenges you encounter to be invitations to improve and ultimately to excel.”

Recounting his own undergraduate struggles at Harvard, Garber recalled one classmate who at first seemed brash and intellectually intimidating. But after he set aside his preconceptions about the classmate and took a chance, the two soon became pals, then roommates, and today remain longtime friends.

First-year students attend Convocation.

Harvard’s alma mater “Fair Harvard” fills Tercentenary Theatre.

Garber holds the Class of 2029 banner during the traditional group photo following Convocation.

“Some of these friendships will form easily and require little to no tending. Others will demand effort to take hold. Those are the ones that will evolve in ways you cannot anticipate — that will lead to debate and argument, conflict and reconciliation, growth and change,” Garber said. “Those are the ones worth pursuing intently because they will deepen your understanding and enlarge your spirit.”

The University’s 31st president noted that many of the students “had to surmount a plethora of obstacles to be part of this class. I know some of you worried that you would not be able to make the journey here — would not be able to become part of our community. We are so glad to see you.

“Harvard would not be Harvard if it did not include inquisitive, ambitious students from across the United States and around the world,” he said to widespread applause.

Making his debut as the new Danoff Dean of Harvard College, David Deming, Ph.D. ’10, urged students to view this period of technological disruption, with the rapid growth of AI and its impact on their future job prospects, not with dread but as a huge opportunity to be bold, dream big, and blaze new trails.

An economist who studies education policy, Deming was most recently academic dean at Harvard Kennedy School and served as a faculty dean at Kirkland House before starting in his new role July 1.

Other University officials joined Garber and Deming on stage, including the Rev. Matthew I. Potts, Pusey Minister in Harvard’s Memorial Church, who delivered the invocation; Dean of Undergraduate Education Amanda Claybaugh; Dean of Students Thomas Dunne; and Nekesa Straker, senior assistant dean of resident life and first-year students.

Student musical groups the Harvard University Band, the Kuumba Singers, and the Harvard Choruses all performed.

“Today, we mark much more than just your beginning here,” Garber said at the end of his address. “We mark your belonging here.”

Farming Minister and local MP tours Sainsbury Laboratory and sees leading Cambridge Agri-Tech research 

The University of Cambridge hosted a visit from Daniel Zeichner, the local MP and Farming Minister, at the Sainsbury Laboratory. The visit brought together fundamental plant science research with crop and Agri-Tech researchers from across the University for a series of research demonstrations and a roundtable discussion.

Mr Zeichner toured the award-winning facility, meeting researchers in the open-plan office and lab spaces, which foster collaboration and advances in multi-disciplinary research. 

The Minister saw exciting examples of foundational research, which have the potential to transform agriculture and ensure long-term sustainability.

The first demonstration was led by Dr Sebastian Schornack and PhD student Nicolas Garcia Hernandez, who are investigating plant developmental processes. The Minister saw through the microscope how they are using beetroot pigments to enable us to see how fungi are colonising living plant roots. This research allows us to track and measure in real time how chemicals, soil tillage and environmental conditions impact this beneficial plant-microbe relationship.

Mr Zeichner then visited the Lab’s microscopy room, and met with Dr Madelaine Bartlett and her colleague Terice Kelly. Dr Madelaine Bartlett's team researches the development of maize flowers (among other grass and cereal species) with a particular focus on the genetics behind these specialised flowers and future crop improvement. The team demonstrated how they image a maize flower on the Lab’s desktop scanning electron microscope. 

The Sainsbury Laboratory boasts its own Bee Room, where Dr Edwige Moyroud demonstrated how bumble bees are helping to reveal the characteristics of petal patterns that are most important for attracting pollinators. Dr Moyroud and her team are identifying the genes that plants use to produce patterns that attract pollinators by combining various research techniques, including experiments, modelling, microscopy and bee behaviour. 

Finally, overlooking Cambridge's Botanic Garden, academics from the Department of Plant Sciences and the Crop Science Centre presented research into regenerative agriculture and the use of AI to measure and prevent crop disease.

Professor Lynn Dicks presented the latest findings of the H3 research on regenerative agriculture. During this ongoing five-year project, Professor Dicks and colleagues have worked collaboratively with farming clusters in the UK to study the impacts of a transition to regenerative agriculture, which has so far been shown to improve soil health and reduce the use of chemicals.

Professor Eves-van Den Akker and his team, based at the University’s Crop Science Centre, have combined low-cost 3D printing of custom imaging machines with state-of-the-art deep-learning algorithms to make millions of measurements, of tens of thousands of parasites across hundreds of genotypes. They are now working with companies to translate this fundamental research, with the aim of accelerating their breeding programs for crop resistance to pests and disease. 

The visit concluded with a discussion of the UK’s leading strengths in Agri-Tech and crop science, and how the UK and Cambridge are an attractive place for researchers from around the world to work, and make exciting advances, with global impact. 

Depression linked to presence of immune cells in the brain’s protective layer

Silhouette photography of man

Immune cells released from bone marrow in the skull in response to chronic stress and adversity could play a key role in symptoms of depression and anxiety, say researchers. The discovery – found in a study in mice – sheds light on the role that inflammation can play in mood disorders and could help in the search for new treatments, in particular for those individuals for whom current treatments are ineffective.

Around 1 billion people will be diagnosed with a mood disorder such as depression or anxiety at some point in their life. While there may be many underlying causes, chronic inflammation – when the body’s immune system stays active for a long time, even when there is no infection or injury to fight – has been linked to depression. This suggests that the immune system may play an important role in the development of mood disorders.

Previous studies have highlighted how high levels of an immune cell known as a neutrophil, a type of white blood cell, are linked to the severity of depression. But how neutrophils contribute to symptoms of depression is currently unclear.

In research published today in Nature Communications, a team led by scientists at the University of Cambridge, UK, and the National Institute of Mental Health, USA, tested a hypothesis that chronic stress can lead to the release of neutrophils from bone marrow in the skull. These cells then collect in the meninges – membranes that cover and protect your brain and spinal cord – and contribute to symptoms of depression.

As it is not possible to test this hypothesis in humans, the team used mice exposed to chronic social stress. In this experiment, an ‘intruder’ mouse is introduced into the home cage of an aggressive resident mouse. The two have brief daily physical interactions and can otherwise see, smell, and hear each other.

The researchers found that prolonged exposure to this stressful environment led to a noticeable increase in levels of neutrophils in the meninges, and that this was linked to signs of depressive behaviour in the mice. Even after the stress ended, the neutrophils lasted longer in the meninges than they did in the blood. Analysis confirmed the researchers’ hypothesis that the meningeal neutrophils – which appeared subtly different from those found in the blood – originated in the skull.

Further analysis suggested that long-term stress triggered a type of immune system ‘alarm warning’ known as type I interferon signalling in the neutrophils. Blocking this pathway – in effect, switching off the alarm – reduced the number of neutrophils in the meninges and improved behaviour in the depressed mice. This pathway has previously been linked to depression – type 1 interferons are used to treat patients with hepatitis C, for example, but a known side effect of the medication is that it can cause severe depression during treatment.

Dr Stacey Kigar from the Department of Medicine at the University of Cambridge said: “Our work helps explain how chronic stress can lead to lasting changes in the brain’s immune environment, potentially contributing to depression. It also opens the door to possible new treatments that target the immune system rather than just brain chemistry.

“There’s a significant proportion of people for whom antidepressants don’t work, possibly as many as one in three patients. If we can figure out what's happening with the immune system, we may be able to alleviate or reduce depressive symptoms.”

The reason why there are high levels of neutrophils in the meninges is unclear. One explanation could be that they are recruited by microglia, a type of immune cell unique to the brain. Another possible explanation is that chronic stress may cause microhaemorrhages, tiny leaks in brain blood vessels, and that neutrophils – the body’s ‘first responders’ – arrive to fix the damage and prevent any further damage. These neutrophils then become more rigid, possibly getting stuck in brain capillaries and causing further inflammation in the brain.

Dr Mary-Ellen Lynall from the Department of Psychiatry at the University of Cambridge said: “We’ve long known that something is different about how neutrophils behave after stressful events, or during depression, but we didn’t know what these neutrophils were doing, where they were going, or how they might be affecting the brain and mind. Our findings show that these ‘first responder’ immune cells leave the skull bone marrow and travel to the brain, where they can influence mood and behaviour.

“Most people will have experienced how our immune systems can drive short-lived depression-like symptoms. When we are sick, for example with a cold or flu, we often lack energy and appetite, sleep more and withdraw from social contact. If the immune system is always in a heightened, pro-inflammatory state, it shouldn’t be too surprising if we experience longer-term problems with our mood.”

The findings could provide a useful signature, or ‘biomarker’, to help identify those patients whose mood disorders are related to inflammation. This could help in the search for better treatments. For example, a clinical trial of a potential new drug that targets inflammation of the brain in depression might appear to fail if trialled on a general cohort of people with depression, whereas using the biomarker to identify individuals whose depression is linked to inflammation could increase the likelihood of the trial succeeding.

The findings may also help explain why depression is a common symptom of other neurological disorders such as stroke and Alzheimer’s disease, as it may be the case that neutrophils are being released in response to the damage to the brain seen in these conditions. But it may also explain why depression is itself a risk factor for dementia in later life, if neutrophils can themselves trigger damage to brain cells.

The research was funded by the National Institute of Mental Health, Medical Research Council and National Institute for Health and Care Research Cambridge Biomedical Research Centre.

Reference
Kigar, SL et al. Chronic social defeat stress induces meningeal neutrophilia via type I interferon signaling in male mice. Nat Comms; 1 Sept 2025; DOI: 10.1038/s41467-025-62840-5

Why employers want workers with high EQs

Work & Economy

Illustration by Liz Zonarich/Harvard Staff

Liz Mineo

Harvard Staff Writer

6 min read

‘Future of Jobs’ report highlights value of emotional intelligence

A recent report on “The Future of Jobs” by the World Economic Forum found that while analytical thinking is still the most coveted skill among employers, several emotional intelligence skills (i.e., motivation, self-awareness, empathy, and active listening) rank among the top 10 in a list of 26 core competencies.

In this edited conversation, Ron Siegel, assistant professor of psychology at Harvard Medical School, explains why emotional intelligence skills are crucial in the workplace, especially in the age of AI.


What’s emotional intelligence? Is it a different way of being smart?

It is a kind of being smart, but it’s not what we usually think of as being smart. In recent decades, psychologists who study intelligence have become aware that there are many different kinds of intelligence. You could think of somebody who has natural athletic ability as having a kind of body or coordination intelligence or somebody who has a natural math ability as having a good deal of mathematical intelligence, and so on.

When we look over human experience in the developed world, where many people have basic food, clothing, and shelter, there’s nonetheless a great deal of conflict and unhappiness. Most of this strife involves the challenges of working with our emotions as humans, and particularly the complexity of our reactions in relationships. Emotional intelligence is a particular skill of recognizing one’s own feelings, working with those feelings, and not just reacting in ways that are going to be problematic. It also involves recognizing the feelings that are arising in others, and then being able to work with others, to work out conflicts, or get along well with one another.

Why do employers consider emotional intelligence one of the top core skills needed to thrive in the workplace?

The importance of emotional competence comes from the observation in the business world, in academia, the military, and every human enterprise, that there are people who are highly competent in technical and analytical skills, but when they interact with others, projects stall. So many resources are wasted in emotional misunderstandings or in people’s difficulty with emotional regulation. We humans are grossly inefficient in trying to get things done because most of our energy is spent on trying to make sure we look good, or on making sure that people think of us in a certain way, or on getting triggered by one another. I suspect that business leaders have realized that it’s relatively easy to get technical expertise in almost anything, but to get people who can understand and get along with one another, that is a challenge. In many projects, there is a growing awareness that this skill is going to be the one that carries the day.

Can you talk about the evolution of the concept of emotional intelligence since publication of the 1995 book “Emotional Intelligence” by Daniel Goleman, Ph.D. ’74?

Humans have known about this for a long time. Western industrialized cultures have very much favored other forms of intelligence, like logical analytical ability, mathematical ability, and entrepreneurial skills over relational skills and the ability to connect with feelings and connect with one another. Over the years, psychologists have become more aware of a strong cultural bias toward certain kinds of intelligence and against other kinds of intelligence, and they have tried to rectify that by looking at emotional intelligence. And when Daniel Goleman wrote his landmark book, people started realizing that there are many people who may have high SAT and GRE scores but are not thriving in life or even succeeding in their work. And when we look at why that is, it turns out that they don’t know how to manage their own emotions or how to read other people’s emotions, and they don’t know how to get along effectively with other people, while other people with far lower GRE and SAT scores have skills to understand and read people and can get a team together and lead them to accomplish things and have great success. There’s a growing realization that emotional intelligence matters, even for external material, goal-oriented activities.

Are emotional intelligence skills relevant in the age of AI?

As people increasingly are interacting with chatbots rather than real human beings to get their work done, I suspect that authentic, connected human interactions are going to become more important. Humans are hardwired to be a social species­ — we long for connection to others. We hate the experience of being ostracized and pushed out of the group. That’s in our basic primate nature, and I suspect that as more of people’s lives are engaged in interactions with AI, even though it does a nice job of imitating human responses, that people will long for simple, natural responses. That’s my hope, anyway, that people will value genuine connection rather than preferring to spend time with chatbots because “My chatbot is so much more complimentary toward me than my spouse or is so much more willing to change its mind to accommodate my needs.” I’m hoping we don’t just go for the chatbots because they’re better at boosting our egos.

What are the components of emotional intelligence? How can we become emotionally competent?

The first component is self-awareness, which means being conscious of our own thoughts, feelings, and what’s happening inside of us. It is the capacity to notice that every simple interaction stimulates myriad different emotions and associations to all the other moments in our life. The second big area is self-regulation, which is the ability to manage our emotions in a healthy way. It means that we’re able to feel the full range of our emotions and yet not be overwhelmed by them. The third big component is social awareness or empathy, and that’s noticing what’s going on in others. This means being free enough of self-preoccupation so that we can see that other people have needs, desires, fears, and hurts, and so we can respond to them in appropriate ways. And the fourth big component is social skills, which is the ability to work well in teams, to be able to solve conflicts and help the team to cooperate.

Emotional competence is key in our personal lives too. I’m a clinical psychologist by training and I know that most people are not struggling because they can’t figure out the answer to a technical question. They are struggling because they can’t figure out how to get along with their kids, their parents, their spouses, their siblings, their neighbors, or their friends. How do we stop hurting each other’s feelings and find a way to feel safely connected and love one another? That’s our big challenge.

Viewing art like an expert

Detail of “Wall Painting Fragment from the Villa at Boscotrecase,” 10 B.C.E.-1 B.C.E.

Photos by Stephanie Mitchell/Harvard Staff Photographer

Arts & Culture

Sy Boles

Harvard Staff Writer

long read

Curators and conservators at the Harvard Art Museums zoom in on the tiny details that tell big stories about some of their favorite works

Looking at art can be intimidating for the untrained. Is this piece impressionist or surrealist? What, exactly, makes it worthy of hanging in a museum?

“Ultimately, it’s subjective,” Lynette Roth, the Daimler Curator of the Busch-Reisinger Museum, told the Gazette in 2023. “I can’t convince you to like something because I say, ‘This is a major artist of the 20th century’ — you might not be interested in that. But my experience has been that it will grow on you as you have more context.”

We asked specialists from the Harvard Art Museums to lend us their expertise to help develop that context. Below, they home in on the tiny details that make pieces of art important.


Sparrows get new perch

“Wall Painting Fragment from the Villa at Boscotrecase,” 10 B.C.E.-1 B.C.E.
Kate Smith with “Wall Painting Fragment from the Villa at Boscotrecase.”

These sparrows were painted high up on the wall of a villa near Naples, Italy, about 2,000 years ago. Though they have suffered some paint loss, they are still recognizable and so lifelike; standing in a puddle of water, one is drinking and splashing. The original wall was part of a grand villa made for the emperor’s grandson; the whole structure was buried by a volcanic eruption in 79 C.E. When the villa was discovered and excavated in the early 20th century, the recovered fragments went to various museums and this single piece came to Harvard, where it lived in storage for almost a century. When the curators decided to display this piece of decorated wall in our Roman galleries in 2014, I reattached flaking paint and removed accumulated grime from the surface, revealing the bright colors and the glossy, polished red and yellow surfaces.

The birds would not have been very visible up near the ceiling; they were minor decorative elements. Now that this piece of wall lives in the museum at eye level, visitors can have a close look. I love how the coarsely ground mineral pigments used to paint them glitter in the light, how jumpy and flighty and alert the birds seem.

Kate Smith, Senior Conservator of Paintings, Head of Paintings Lab


Retracing the creative process

“Leaping Antelopes,” c. 1745
Penley Knipe with “Leaping Antelopes.”

This small drawing from the Kota tradition of painting in India measures just 3½ by 7 inches. It has energetic antelopes leaping across it. As a paper conservator, I am tasked with the physical care of the various types of works on paper. What I love most is any and all evidence of the materials the artist may have used.

This drawing also has equally elegant swirls of ink, as the artist tests out various ink colors and dilutions. You can see many grays but there is also a bright orange squiggle and a chartreuse one as well — colors that don’t make an appearance as an antelope. One gets the impression that the paper is not only a mid-18th-century sheet of sketches where the artist works and reworks the prancing antelopes, but it is also a scratch pad. These details put us that much closer to the artist. Speaking of tiny details, don’t miss the small head at lower left as well. I find these small tidbits both delightful and informative.

Penley Knipe, Philip and Lynn Straus Senior Conservator of Works of Art on Paper and Head of Paper Lab


Try to look away

“Child from the Old Town,” Ernst Thoms, 1925
Lynette Roth with “Child from the Old Town.”

Currently on view at the museums is a small painting with a monumental impact. In it, a child’s melancholic gaze is highlighted by the strong play of light and shadow on her forehead and around her mouth. The unnamed sitter is described in the work’s title only as an inhabitant of a city center, which we see behind her sketched thinly in oil paint.

In a period of economic and political instability in Germany after World War I, such areas were often plagued by housing instability and a lack of fresh green spaces for working-class families. By lending such dramatic contour to the young girl’s face — as if a spotlight were shining directly at her — Ernst Thoms makes her palpable and challenges us to consider the material circumstances of workers’ lives.

Lynette Roth, Daimler Curator of the Busch-Reisinger Museum


Echoes of love verse

“Portrait of Maharaja Kumar Sawant Singh of Kishangarh,” 1745
Janet O’Brien with “Portrait of Maharaja Kumar Sawant Singh of Kishangarh.”

“Inhabit the garden of love, sing of the garden of love. Nagar says: enter the beloved’s dwelling in the garden of love.”

These are the words of Maharaja Sawant Singh, an 18th-century ruler of Kishangarh, Rajasthan, and a poet under the pen name Nagari Das.

In this portrait, the poet-king stands amidst pink roses in full bloom. Gazing down from the window above is his beloved. But my favorite tiny detail — and the most tender and touching one — is the female attendant holding the door ajar. With just the tip of her bejeweled nose and the edge of her red skirt visible, she reaches forward with a sprig of roses, inviting Sawant Singh to “enter the beloved’s dwelling in the garden of love.”

But these words do not accompany the painting. Rather, they are from one of his poems called the “Garden of Love” (or “ʿIshq Chaman”). Dedicated to the divine passion of Krishna for Radha, the poem is an expression of Sawant Singh’s ardent love for Bani Thani, a poet and singer, who is most likely the woman seated at the window.

Janet O’Brien, Calderwood Curatorial Fellow in South Asian and Islamic Art


Can you spot the tiny animal?

“Garden Carpet,” 18th century
A tiny animal is hidden in the 18th-century wool carpet.

The Islamic Art gallery currently displays a monumental Persian carpet. Dating back to the 18th century, this wool carpet is adorned with a design inspired by gardens. Although many Persian rugs reference gardens through botanical ornament, this example presents a formal garden layout known as the chahar bagh (four-part garden). Such gardens, planted with fruit trees and separated by axial water channels, were an important part of the palatial and urban complexes of the Islamic era in Iran, Central Asia, and later in India. On this carpet, a wide stream of water, intersected by narrower channels, runs through flowerbeds. Amongst this rich design, a tiny animal, possibly a goat, is asymmetrically placed in one of the flowerbeds. Often invisible to the unaware, the little goat appears to be a token left by the weavers of this carpet. Although we do not know the artisans who produced this carpet following an earlier established design, the tiny animal is a reminder of their existence and the liberties they took to insert their identity, only to be revealed to the keen eye.

Aysin Yoltar-Yildirim, Norma Jean Calderwood Curator of Islamic and Later Indian Art


Coin signed by its engraver

“Decadrachm of Syracuse,” Kimon, 405-400 B.C.E.
Laure Marest shares a favorite coin.

This silver coin minted in ancient Syracuse is truly remarkable. It is a superb example of miniature engraving. Although it is one of the largest ancient Greek denominations ever minted — worth 10 drachmai — it is only a third bigger in diameter than a U.S. dollar. Yet, the engraving is incredibly detailed: a four-horse chariot on the obverse — unfortunately not well-preserved on this specimen — and the head of the nymph Arethusa complete with a hairnet and jewelry on the reverse. Even more special is the fact that the engraver of the die — the punch used to strike the coin — signed his work! The letter K on the headband just above the forehead is his initial, and his full name is inscribed on the dolphin below her neck: KIMON. This is extremely rare. We only know of a few ancient die engravers by name and Kimon is the most famous and accomplished. There is something so moving about being able to refer to the artist by name, although we know almost nothing else about him. It is a link with this person who lived somewhere around Sicily over 2,400 years ago.

Laure Marest, Damarete Associate Curator of Ancient Coins


An instant classic

“Marsha,” Dawoud Bey, 1998
"Marsha," a photo dyptich by Dawoud Bey. A close-up of the edge of  a photograph showing the dye process of a large-format Polaroid image.
Dawoud Bey’s diptych is a large-format type of Polaroid.

Depending on the generation you were born into, you might recall the 2003 Outkast music hit “Hey Ya!” that chorused “shake it like a Polaroid picture.” Believe it or not, this diptych is also a Polaroid picture, but considerably larger. One of these “instant” photographs is closer to 20 inches by 24 inches in size and was literally pulled from its even larger traditional view camera, only a handful of which were ever made and distributed across the globe. The emblematic “squash” of chemistry along each side is an artifact of the sophisticated dye diffusion process.

A light-sensitive sheet is exposed inside of the camera. That same sheet is then squeegeed against a second sheet (coated with dye-receiving material) through reagent pods and motor-driven spreader rolls as the sandwich is pulled out of the camera. After roughly 1½ minutes pass, the two sheets are masterfully peeled apart and the second sheet exhibits the recorded image in color. Like magic! Today, the Harvard Collection of Historical Scientific Instruments has one of the original 20×24 cameras.

Tatiana Cole, Conservator of Photographs


Secret preserved in ancient mirror

“Large Eight-Lobed Mirror with Relief Decoration,” eighth century
Susan Costello holds an eight-lobed mirror from the eighth-century Tang dynasty. A close-up image of a mirror from the Tang dynasty.
Susan Costello shares an eighth-century Chinese bronze mirror.

While examining an eighth-century Chinese bronze mirror under the microscope, I discovered impressions of a long-lost textile hidden among the layers of red, green, and blue corrosion. These pseudomorphs formed over centuries during burial, as the organic fibers decayed and were replaced by copper corrosion, perfectly preserving the fabric down to individual fibers. They offer a rare glimpse into ancient textiles that would otherwise be lost to time.

Besides being fascinating, these textile pseudomorphs help recover part of the mirror’s lost narrative. We have no archaeological context to tell us where the mirror was found, who owned it, or how it was placed in the grave, but the impressions left behind speak volumes. Found on both the front and back of the mirror, the pseudomorphs suggest the object was once carefully wrapped in cloth. This was an object owned by a living person who valued it in both life and death.

Finding this unexpected human connection to the past moved me, and the fact that it was not the original textile that survived, but traces of it, preserved through a chemical transformation, makes it all the more compelling.

Susan Costello, Conservator of Objects and Sculpture


Transported to Pollock’s studio

“No. 2,” Jackson Pollock, 1950
Narayan Khandekar with "No. 2" by Jackson Pollock. A close-up of the edge of "No. 2" by Jackson Pollock.
Narayan Khandekar appreciates the marks of Pollock’s process.

This is a painting in close to untouched condition, with minimal conservation work, as if it had just left Betty Parsons Gallery. In this detail we can see the painting is stapled from the front to hold it onto a wooden stretcher. The canvas has drips and splashes of paint, and a single blue thread marks the selvedge. There is another selvedge on the opposite side, telling us this was the full width of the canvas roll. Knowing this, we can work out the steps of the painting’s creation. Pollock unrolled the canvas on the floor and splashed paint onto the surface in his characteristic method. When he was finished, he cut the painting from the roll and, not wanting to lose any of the image, stapled the canvas onto the stretcher from the front, sometimes through the paint. Almost all artists fold the canvas over the edge of the stretcher and attach it from the sides or back where it is out of sight, but that was not important to Pollock. This cluster of clues tells us so much from so little — it takes us from where we stand in the gallery back in time to watching Pollock at work in his studio.

Narayan Khandekar, Director of the Straus Center for Conservation and Technical Studies and Senior Conservation Scientist


Art meets mechanics

“Light Prop for an Electric Stage,” László Moholy-Nagy, 1930
Peter Murphy stands with "Light Prop for an Electric Stage (Light-Space Modulator)." A close-up of "Light Prop for an Electric Stage (Light-Space Modulator)."
Peter Murphy stands with “Light Prop for an Electric Stage.”

This icon of the Busch-Reisinger Museum is the pinnacle of László Moholy-Nagy’s experiments at the Bauhaus. Throughout his tenure as faculty at the influential school of art and design, Moholy-Nagy envisioned how to bring his sculpture to life. It was only in 1930 — two years after leaving the Bauhaus — that he was able to realize his vision with the help of the German electronics company AEG, an engineer named Stefan Sebok, and a mechanic named Otto Ball. Through this collaboration, the Light Prop was able to come to life and move. Since then, the sculpture has struggled with malfunctions and damage, leading to many of its original parts being replaced. Except for the motor from Boston Gear, it’s nearly impossible to determine whether a part is a replica. One easily overlooked detail, however, is original to the sculpture: a metal plaque on its platform that features Otto Ball’s name and logo. For me, this subtle trace is crucial not only for understanding the Light Prop’s history, but for recognizing that this groundbreaking sculpture has involved many hands across its many lives.

Peter Murphy, Stefan Engelhorn Curatorial Fellow in the Busch-Reisinger Museum

Probability theorem gets quantum makeover after 250 years

How likely you think something is to happen depends on what you already believe about the circumstances. That is the simple concept behind Bayes’ rule, an approach to calculating probabilities, first proposed in 1763. Now, an international team of researchers has shown how Bayes’ rule operates in the quantum world. 
 
“I would say it is a breakthrough in mathematical physics,” said Professor Valerio Scarani, Deputy Director and Principal Investigator at the Centre for Quantum Technologies, and member of the team. His co-authors on the work published on 28 August 2025 in Physical Review Letters are Assistant Professor Ge Bai at the Hong Kong University of Science and Technology in China, and Professor Francesco Buscemi at Nagoya University in Japan. 

“Bayes’ rule has been helping us make smarter guesses for 250 years. Now we have taught it some quantum tricks,” said Prof Buscemi.

While researchers before them had proposed quantum analogues for Bayes’ rule, they are the first to derive a quantum Bayes’ rule from a fundamental principle.  

Conditional probability 

Bayes’ rule is named for Thomas Bayes, who first defined his rules for conditional probabilities in ‘An Essay Towards Solving a Problem in the Doctrine of Chances’.  

Consider a case in which a person tests positive for flu. They may have suspected they were sick, but this new information would change how they think about their health. Bayes’ rule provides a method to calculate the probability of flu conditioned not only on the test result and the chances of the test giving a wrong answer, but also on the individual’s initial beliefs. 
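
As a rough illustration of that calculation, here is a minimal sketch in Python with made-up numbers for the prior belief and the test’s error rates (none of these figures come from the study):

```python
# Bayes' rule for the flu-test example, with illustrative (made-up) numbers.
def posterior_given_positive(prior: float, sensitivity: float, false_pos: float) -> float:
    """P(flu | positive) = P(positive | flu) * P(flu) / P(positive)."""
    p_positive = sensitivity * prior + false_pos * (1.0 - prior)
    return sensitivity * prior / p_positive

# Someone who already suspected they were sick (prior 0.30)
# versus someone who did not (prior 0.05):
print(posterior_given_positive(0.30, 0.90, 0.10))  # ~0.79
print(posterior_given_positive(0.05, 0.90, 0.10))  # ~0.32
```

The same test result moves the two people to very different conclusions, because the update starts from different initial beliefs.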

Bayes’ rule interprets probabilities as expressing degrees of belief in an event. This has long been debated, since some statisticians think that probabilities should be “objective” and not based on beliefs. However, in situations where beliefs are involved, Bayes’ rule is accepted as a guide for reasoning. This is why it has found widespread use from medical diagnosis and weather prediction to data science and machine learning.

Principle of minimum change 

When calculating probabilities with Bayes’ rule, the principle of minimum change is obeyed. Mathematically, the principle of minimum change minimises the distance between the joint probability distributions of the initial and updated belief. Intuitively, this is the idea that for any new piece of information, beliefs are updated in the smallest possible way that is compatible with the new facts. In the case of the flu test, for example, a negative test would not imply that the person is healthy, but rather that they are less likely to have the flu. 
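
The classical version of this principle can be checked numerically. The sketch below is only an illustration and uses KL divergence as the notion of “distance” (the quantum work uses fidelity instead): the updated belief that stays closest to the prior joint distribution while being consistent with a positive test turns out to be exactly the Bayes posterior.

```python
# Minimum-change illustration: among all beliefs that put full weight on "positive test",
# the one closest (in KL divergence) to the prior joint distribution is the Bayes posterior.
import numpy as np

prior_flu = 0.30
sensitivity, false_pos = 0.90, 0.10

# Prior joint probabilities of (flu, positive) and (no flu, positive);
# the "negative" cells carry no weight in the updated belief and drop out of the KL sum.
joint_pos = np.array([sensitivity * prior_flu, false_pos * (1 - prior_flu)])

def kl(q, p):
    return float(np.sum(q * np.log(q / p)))

# Scan candidate updated beliefs q = [P(flu), P(no flu)].
candidates = np.linspace(0.01, 0.99, 9801)
best = candidates[int(np.argmin([kl(np.array([q, 1 - q]), joint_pos) for q in candidates]))]

bayes_posterior = joint_pos[0] / joint_pos.sum()
print(best, bayes_posterior)  # both ~0.794: the minimum-change update reproduces Bayes' rule
```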

In their work, Prof Scarani, who is also from the NUS Department of Physics, Asst Prof Bai, and Prof Buscemi began with a quantum analogue to the minimum change principle. They quantified change in terms of quantum fidelity, which is a measure of the closeness between quantum states.
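
Quantum fidelity has a standard closed form that is easy to compute for small systems. The snippet below is an illustrative sketch of that definition, not code from the paper:

```python
# Uhlmann fidelity between two density matrices: F = (Tr sqrt( sqrt(rho) sigma sqrt(rho) ))^2.
import numpy as np
from scipy.linalg import sqrtm

def fidelity(rho: np.ndarray, sigma: np.ndarray) -> float:
    sqrt_rho = sqrtm(rho)
    return float(np.real(np.trace(sqrtm(sqrt_rho @ sigma @ sqrt_rho))) ** 2)

# Two single-qubit states written as 2x2 density matrices.
rho = np.array([[0.9, 0.0], [0.0, 0.1]])
sigma = np.array([[0.8, 0.1], [0.1, 0.2]])

print(fidelity(rho, sigma))  # ~0.97: the states are close but not identical
print(fidelity(rho, rho))    # 1.0: identical states have maximal fidelity
```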

Researchers always thought a quantum Bayes’ rule should exist because quantum states define probabilities. For example, the quantum state of a particle provides the probability of it being found at different locations. The goal is to determine the whole quantum state, but the particle is only found at one location when a measurement is performed. This new information will then update the belief, boosting the probability around that location.  

The team derived their quantum Bayes’ rule by maximising the fidelity between two objects that represent the forward and the reverse process, in analogy with a classical joint probability distribution. Maximising fidelity is equivalent to minimising change. They found in some cases their equations matched the Petz recovery map, which was proposed by Dénes Petz in the 1980s and was later identified as one of the most likely candidates for the quantum Bayes’ rule based just on its properties.  
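
The Petz map itself has a compact closed form. The sketch below is a textbook-style illustration (not the authors’ construction): it builds a single-qubit depolarizing channel and checks the map’s defining property, namely that it exactly recovers the prior state from the channel’s output on that prior.

```python
# Illustrative Petz recovery map:
#   P(X) = sqrt(sigma) Edag( E(sigma)^(-1/2) X E(sigma)^(-1/2) ) sqrt(sigma),
# where E is a quantum channel with Kraus operators {K_i} and Edag is its adjoint.
import numpy as np
from scipy.linalg import sqrtm, inv

def channel(rho, kraus):
    return sum(K @ rho @ K.conj().T for K in kraus)

def adjoint(X, kraus):
    return sum(K.conj().T @ X @ K for K in kraus)

def petz_map(X, sigma, kraus):
    s = sqrtm(sigma)
    w = inv(sqrtm(channel(sigma, kraus)))  # E(sigma)^(-1/2)
    return s @ adjoint(w @ X @ w, kraus) @ s

# Single-qubit depolarizing channel with noise strength p, and a non-uniform prior state.
p = 0.3
I2, X_, Y_, Z_ = np.eye(2), np.array([[0, 1], [1, 0]]), np.array([[0, -1j], [1j, 0]]), np.array([[1, 0], [0, -1]])
kraus = [np.sqrt(1 - 3 * p / 4) * I2, np.sqrt(p / 4) * X_, np.sqrt(p / 4) * Y_, np.sqrt(p / 4) * Z_]
sigma = np.array([[0.7, 0.2], [0.2, 0.3]])

recovered = petz_map(channel(sigma, kraus), sigma, kraus)
print(np.allclose(recovered, sigma))  # True: the prior is recovered exactly
```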

“This is the first time we have derived it from a higher principle, which could be a validation for using the Petz map,” said Prof Scarani. The Petz map has potential applications in quantum computing for tasks such as quantum error correction and machine learning. The team plans to explore whether applying the minimum change principle to other quantum measures might reveal other solutions.  

Understanding shocks to welfare systems

In an unhappy coincidence, the Covid-19 pandemic and Angie Jo’s doctoral studies in political science both began in 2019. Paradoxically, this global catastrophe helped define her primary research thrust.

As countries reacted with unprecedented fiscal measures to protect their citizens from economic collapse, Jo MCP ’19 discerned striking patterns among these interventions: Nations typically seen as the least generous on social welfare were suddenly deploying the most dramatic emergency responses.

“I wanted to understand why countries like the U.S., which famously offer minimal state support, suddenly mobilize an enormous emergency response to a crisis — only to let it vanish after the crisis passes,” says Jo.

Driven by this interest, Jo launched into a comparative exploration of welfare states that forms the backbone of her doctoral research. Her work examines how different types of welfare regimes respond to collective crises, and whether these responses lead to lasting institutional reforms or merely temporary patches.

A mismatch in investments

Jo’s research focuses on a particular subset of advanced industrialized democracies — countries like the United States, United Kingdom, Canada, and Australia — that political economists classify as “liberal welfare regimes.” These nations stand in contrast to the “social democratic welfare regimes” exemplified by Scandinavian countries.

“In everyday times, citizens in countries like Denmark or Sweden are already well-protected by a deep and comprehensive welfare state,” Jo explains. “When something like Covid hits, these countries were largely able to use the social policy tools and administrative infrastructure they already had, such as subsidized childcare and short-time work schemes that prevent mass layoffs.”

Liberal welfare regimes, however, exhibit a different pattern. During normal periods, “government assistance is viewed by many as the last resort,” Jo observes. “It’s means-tested and minimal, and the responsibility to manage risk is put on the individual.”

Yet when Covid struck, these same governments “spent historically unprecedented amounts on emergency aid to citizens, including stimulus checks, expanded unemployment insurance, child tax credits, grants, and debt forbearance that might normally have faced backlash from many Americans as government ‘handouts.’”

This stark contrast — minimal investment in social safety nets during normal times followed by massive crisis spending — lies at the heart of Jo’s inquiry. “What struck me was the mismatch: The U.S. invests so little in social welfare at baseline, but when crisis hits, it can suddenly unleash massive aid — just not in ways that stick. So what happens when the next crisis comes?”

From architecture to political economy

Jo took a winding path to studying welfare states in crisis. Born in South Korea, she moved with her family to California at age 3 as her parents sought an American education for their children. After moving back to Korea for high school, she attended Harvard University, where she initially focused on art and architecture.

“I thought I’d be an artist,” Jo recalls, “but I always had many interests, and I was very aware of different countries and different political systems, because we were moving around a lot.”

While studying architecture at Harvard, Jo’s academic focus pivoted.

“I realized that most of the decisions around how things get built, whether it’s a building or a city or infrastructure, are made by the government or by powerful private actors,” she explains. “The architect is the artist’s hand that is commissioned to execute, but the decisions behind it, I realized, were what interested me more.”

After a year working in macroeconomics research at a hedge fund, Jo found herself drawn to questions in political economy. “While I didn’t find the zero-sum game of finance compelling, I really wanted to understand the interactions between markets and governments that lay behind the trades,” she says.

Jo decided to pursue a master’s degree in city planning at MIT, where she studied the political economy of master-planning new cities as a form of industrial policy in China and South Korea, before transitioning to the political science PhD program. Her research focus shifted dramatically when the Covid-19 pandemic struck.

“It was the first time I realized, wow, these wealthy Western democracies have serious problems, too,” Jo says. “They are not dealing well with this pandemic and the structural inequalities and the deep tensions that have always been part of some of these societies, but are being tested even further by the enormity of this shock.”

The costs of crisis response

One of Jo’s key insights challenges conventional wisdom about fiscal conservatism. The assumption that keeping government small saves money in the long run may be fundamentally flawed when considering crisis response.

“What I’m exploring in my research is the irony that the less you invest in a capable, effective and well-resourced government, the more that backfires when a crisis inevitably hits and you have to patch up the holes,” Jo argues. “You’re not saving money; you’re deferring the cost.”

This inefficiency becomes particularly apparent when examining how different countries deployed aid during Covid. Countries like Denmark, with robust data systems connecting health records, employment information, and family data, could target assistance with precision. The United States, by contrast, relied on blunter instruments.

“If your system isn’t built to deliver aid in normal times, it won’t suddenly work well under pressure,” Jo explains. “The U.S. had to invent entire programs from scratch overnight — and many were clumsy, inefficient, or regressive.”

There is also a political aspect to this constraint. “Not only do liberal welfare countries lack the infrastructure to address crises, they are often governed by powerful constituencies that do not want to build it — they deliberately choose to enact temporary benefits that are precisely designed to fade,” Jo argues. “This perpetuates a cycle where short-term compensations are employed from crisis to crisis, constraining the permanent expansion of the welfare state.”

Missed opportunities

Jo’s dissertation also examines whether crises provide opportunities for institutional reform. Her second paper focuses on the 2008 financial crisis in the United States, and the Hardest Hit Fund, a program that allocated federal money to state housing finance agencies to prevent foreclosures.

“I ask why, with hundreds of millions in federal aid and few strings attached, state agencies ultimately helped so few underwater homeowners shed unmanageable debt burdens,” Jo says. “The money and the mandate were there — the transformative capacity wasn’t.”

Some states used the funds to pursue ambitious policy interventions, such as restructuring mortgage debt to permanently reduce homeowners’ principal and interest burdens. However, most opted for temporary solutions like helping borrowers make up missed payments, while preserving their original contract. Partisan politics, financial interests, and status quo bias are most likely responsible for these varying state strategies, Jo believes.

She sees this as “another case of the choice that governments have between throwing money at the problem as a temporary Band-Aid solution, or using a crisis as an opportunity to pursue more ambitious, deeper reforms that help people more sustainably in the long run.”

The significance of crisis response research

For Jo, understanding how welfare states respond to crises is not just an academic exercise, but a matter of profound human consequence.

“When there’s an event like the financial crisis or Covid, the scale of suffering and the welfare gap that emerges is devastating,” Jo emphasizes. “I believe political science should be actively studying these rare episodes, rather than disregarding them as once-in-a-century anomalies.”

Her research carries implications for how we think about welfare state design and crisis preparedness. As Jo notes, the most vulnerable members of society — “people who are unbanked, undocumented, people who have low or no tax liability because they don’t make enough, immigrants or those who don’t speak English or don’t have access to the internet or are unhoused” — are often invisible to relief systems.

As Jo prepares for her career in academia, she is motivated to apply her political science training to address such failures. “We’re going to have more crises, whether pandemics, AI, climate disasters, or financial shocks,” Jo warns. “Finding better ways to cover those people is essential, and is not something that our current welfare state — or our politics — are designed to handle.”

© Photo: Minda de Gunzburg Center for European Studies/Harvard University

“I wanted to understand why countries like the U.S., which famously offer minimal state support, suddenly mobilize an enormous emergency response to a crisis — only to let it vanish after the crisis passes,” says PhD candidate Angie Jo.

MIT researchers develop AI tool to improve flu vaccine strain selection

Every year, global health experts are faced with a high-stakes decision: Which influenza strains should go into the next seasonal vaccine? The choice must be made months in advance, long before flu season even begins, and it can often feel like a race against the clock. If the selected strains match those that circulate, the vaccine will likely be highly effective. But if the prediction is off, protection can drop significantly, leading to (potentially preventable) illness and strain on health care systems.

This challenge became even more familiar to scientists during the Covid-19 pandemic. Think back to the times (and time and time again) when new variants emerged just as vaccines were being rolled out. Influenza behaves like a similarly rowdy cousin, mutating constantly and unpredictably. That makes it hard to stay ahead, and therefore harder to design vaccines that remain protective.

To reduce this uncertainty, scientists at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the MIT Abdul Latif Jameel Clinic for Machine Learning in Health set out to make vaccine selection more accurate and less reliant on guesswork. They created an AI system called VaxSeer, designed to predict dominant flu strains and identify the most protective vaccine candidates, months ahead of time. The tool uses deep learning models trained on decades of viral sequences and lab test results to simulate how the flu virus might evolve and how the vaccines will respond.

Traditional evolution models often analyze the effect of single amino acid mutations independently. “VaxSeer adopts a large protein language model to learn the relationship between dominance and the combinatorial effects of mutations,” explains Wenxian Shi, a PhD student in MIT’s Department of Electrical Engineering and Computer Science, researcher at CSAIL, and lead author of a new paper on the work. “Unlike existing protein language models that assume a static distribution of viral variants, we model dynamic dominance shifts, making it better suited for rapidly evolving viruses like influenza.”

An open-access report on the study was published today in Nature Medicine.

The future of flu

VaxSeer has two core prediction engines: one that estimates how likely each viral strain is to spread (dominance), and another that estimates how effectively a vaccine will neutralize that strain (antigenicity). Together, they produce a predicted coverage score: a forward-looking measure of how well a given vaccine is likely to perform against future viruses.

The score ranges from negative infinity to 0. The closer the score is to 0, the better the antigenic match between the vaccine strains and the circulating viruses. (You can imagine it as the negative of some kind of “distance.”)
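
To make the shape of such a score concrete, here is a deliberately simplified toy version, not VaxSeer’s actual formula: weight each circulating strain by its predicted dominance and take the logarithm of its predicted antigenic match, so a perfect match on every dominant strain scores 0 and anything worse goes negative.

```python
# Toy coverage-style score (NOT VaxSeer's actual formula): dominance-weighted log of the
# predicted antigenic match, so the score lies in (-inf, 0] and hits 0 only on a perfect match.
import math

def toy_coverage_score(dominance: dict, match: dict) -> float:
    """dominance: predicted share of each strain (sums to 1); match: predicted match in (0, 1]."""
    return sum(dominance[s] * math.log(match[s]) for s in dominance)

dominance = {"strainA": 0.6, "strainB": 0.3, "strainC": 0.1}
good_vaccine = {"strainA": 0.95, "strainB": 0.90, "strainC": 0.50}
poor_vaccine = {"strainA": 0.40, "strainB": 0.90, "strainC": 0.95}

print(toy_coverage_score(dominance, good_vaccine))  # ~ -0.13 (close to 0: good expected coverage)
print(toy_coverage_score(dominance, poor_vaccine))  # ~ -0.59 (poor match on the dominant strain)
```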

In a 10-year retrospective study, the researchers evaluated VaxSeer’s recommendations against those made by the World Health Organization (WHO) for two major flu subtypes: A/H3N2 and A/H1N1. For A/H3N2, VaxSeer’s choices outperformed the WHO’s in nine out of 10 seasons, based on retrospective empirical coverage scores (a surrogate metric for vaccine effectiveness, calculated from the observed dominance in past seasons and experimental HI test results). The team used this metric to evaluate vaccine selections, since effectiveness data are available only for vaccines actually given to the population.

For A/H1N1, it outperformed or matched the WHO in six out of 10 seasons. In one notable case, for the 2016 flu season, VaxSeer identified a strain that wasn’t chosen by the WHO until the following year. The model’s predictions also showed strong correlation with real-world vaccine effectiveness estimates, as reported by the CDC, Canada’s Sentinel Practitioner Surveillance Network, and Europe’s I-MOVE program. VaxSeer’s predicted coverage scores aligned closely with public health data on flu-related illnesses and medical visits prevented by vaccination.

So how exactly does VaxSeer make sense of all these data? Intuitively, the model first estimates how rapidly a viral strain spreads over time using a protein language model, and then determines its dominance by accounting for competition among different strains.

These estimates are then plugged into a mathematical framework based on ordinary differential equations to simulate viral spread over time. For antigenicity, the system estimates how well a given vaccine strain will perform in a common lab test called the hemagglutination inhibition assay. This measures how effectively antibodies can inhibit the virus from binding to human red blood cells, a widely used proxy for antigenic match.
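
As a generic illustration of what strain competition in an ordinary-differential-equation framework can look like (a replicator-style toy, not VaxSeer’s actual model), each strain’s share of circulation grows or shrinks according to how its fitness compares with the current average:

```python
# Replicator-style toy of strain competition (not VaxSeer's model):
# dx_i/dt = x_i * (f_i - average fitness), integrated with simple Euler steps,
# so faster-spreading strains gradually take over the share of circulation.
import numpy as np

def simulate_dominance(fitness, x0, dt=0.01, steps=2000):
    x = x0.copy()
    for _ in range(steps):
        avg = float(x @ fitness)
        x = x + dt * x * (fitness - avg)
        x = np.clip(x, 0.0, None)
        x /= x.sum()  # keep the shares a valid distribution
    return x

fitness = np.array([1.0, 1.2, 0.9])     # in this toy, strain B spreads fastest
x0 = np.array([0.80, 0.15, 0.05])       # initial shares of circulation
print(simulate_dominance(fitness, x0))  # strain B ends up with the largest share
```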

Outpacing evolution

“By modeling how viruses evolve and how vaccines interact with them, AI tools like VaxSeer could help health officials make better, faster decisions — and stay one step ahead in the race between infection and immunity,” says Shi. 

VaxSeer currently focuses only on the flu virus’s HA (hemagglutinin) protein, the major antigen of influenza. Future versions could incorporate other proteins like NA (neuraminidase), and factors like immune history, manufacturing constraints, or dosage levels. Applying the system to other viruses would also require large, high-quality datasets that track both viral evolution and immune responses — data that aren’t always publicly available. The team, however, is currently working on methods that can predict viral evolution in low-data regimes by building on relations between viral families.

“Given the speed of viral evolution, current therapeutic development often lags behind. VaxSeer is our attempt to catch up,” says Regina Barzilay, the School of Engineering Distinguished Professor for AI and Health at MIT, AI lead of Jameel Clinic, and CSAIL principal investigator. 

“This paper is impressive, but what excites me perhaps even more is the team’s ongoing work on predicting viral evolution in low-data settings,” says Assistant Professor Jon Stokes of the Department of Biochemistry and Biomedical Sciences at McMaster University in Hamilton, Ontario. “The implications go far beyond influenza. Imagine being able to anticipate how antibiotic-resistant bacteria or drug-resistant cancers might evolve, both of which can adapt rapidly. This kind of predictive modeling opens up a powerful new way of thinking about how diseases change, giving us the opportunity to stay one step ahead and design clinical interventions before escape becomes a major problem.”

Shi and Barzilay wrote the paper with MIT CSAIL postdoc Jeremy Wohlwend ’16, MEng ’17, PhD ’25 and recent CSAIL affiliate Menghua Wu ’19, MEng ’20, PhD ’25. Their work was supported, in part, by the U.S. Defense Threat Reduction Agency and MIT Jameel Clinic.

© Image: Alex Gagne

The VaxSeer system developed at MIT can predict dominant flu strains and identify the most protective vaccine candidates. The tool uses deep learning models trained on decades of viral sequences and lab test results to simulate how the flu virus might evolve and how the vaccines will respond. Pictured: Senior author Regina Barzilay (left) and first author Wenxian Shi.


New self-assembling material could be the key to recyclable EV batteries

Today’s electric vehicle boom is tomorrow’s mountain of electronic waste. And while myriad efforts are underway to improve battery recycling, many EV batteries still end up in landfills.

A research team from MIT wants to help change that with a new kind of self-assembling battery material that quickly breaks apart when submerged in a simple organic liquid. In a new paper published in Nature Chemistry, the researchers showed the material can work as the electrolyte in a functioning, solid-state battery cell and then revert back to its original molecular components in minutes.

The approach offers an alternative to shredding the battery into a mixed, hard-to-recycle mass. Instead, because the electrolyte serves as the battery’s connecting layer, when the new material returns to its original molecular form, the entire battery disassembles to accelerate the recycling process.

“So far in the battery industry, we’ve focused on high-performing materials and designs, and only later tried to figure out how to recycle batteries made with complex structures and hard-to-recycle materials,” says the paper’s first author Yukio Cho PhD ’23. “Our approach is to start with easily recyclable materials and figure out how to make them battery-compatible. Designing batteries for recyclability from the beginning is a new approach.”

Joining Cho on the paper are PhD candidate Cole Fincher, Ty Christoff-Tempesta PhD ’22, Kyocera Professor of Ceramics Yet-Ming Chiang, Visiting Associate Professor Julia Ortony, Xiaobing Zuo, and Guillaume Lamour.

Better batteries

There’s a scene in one of the “Harry Potter” films where Professor Dumbledore cleans a dilapidated home with the flick of the wrist and a spell. Cho says that image stuck with him as a kid. (What better way to clean your room?) When he saw a talk by Ortony on engineering molecules so that they could assemble into complex structures and then revert back to their original form, he wondered if it could be used to make battery recycling work like magic.

That would be a paradigm shift for the battery industry. Today, batteries require harsh chemicals, high heat, and complex processing to recycle. There are three main parts of a battery: the positively charged cathode, the negatively charged anode, and the electrolyte that shuttles lithium ions between them. The electrolytes in most lithium-ion batteries are highly flammable and degrade over time into toxic byproducts that require specialized handling.

To simplify the recycling process, the researchers decided to make a more sustainable electrolyte. For that, they turned to a class of molecules that self-assemble in water, named aramid amphiphiles (AAs), whose chemical structures and stability mimic that of Kevlar. The researchers further designed the AAs to contain polyethylene glycol (PEG), which can conduct lithium ions, on one end of each molecule. When the molecules are exposed to water, they spontaneously form nanoribbons with ion-conducting PEG surfaces and bases that imitate the robustness of Kevlar through tight hydrogen bonding. The result is a mechanically stable nanoribbon structure that conducts ions across its surface.

“The material is composed of two parts,” Cho explains. “The first part is this flexible chain that gives us a nest, or host, for lithium ions to jump around. The second part is this strong organic material component that is used in the Kevlar, which is a bulletproof material. Those make the whole structure stable.”

When added to water, the molecules self-assemble into millions of nanoribbons that can be hot-pressed into a solid-state material.

“Within five minutes of being added to water, the solution becomes gel-like, indicating there are so many nanofibers formed in the liquid that they start to entangle each other,” Cho says. “What’s exciting is we can make this material at scale because of the self-assembly behavior.”

The team tested the material’s strength and toughness, finding it could endure the stresses associated with making and running the battery. They also constructed a solid-state battery cell that used lithium iron phosphate for the cathode and lithium titanium oxide as the anode, both common materials in today’s batteries. The nanoribbons moved lithium ions successfully between the electrodes, but a side-effect known as polarization limited the movement of lithium ions into the battery’s electrodes during fast bouts of charging and discharging, hampering its performance compared to today’s gold-standard commercial batteries.

“The lithium ions moved along the nanofiber all right, but getting the lithium ion from the nanofibers to the metal oxide seems to be the most sluggish point of the process,” Cho says.

When they immersed the battery cell in organic solvents, the material immediately dissolved, with each part of the battery falling away for easier recycling. Cho compared the material’s reaction to cotton candy being submerged in water.

“The electrolyte holds the two battery electrodes together and provides the lithium-ion pathways,” Cho says. “So, when you want to recycle the battery, the entire electrolyte layer can fall off naturally and you can recycle the electrodes separately.”

Validating a new approach

Cho says the material is a proof of concept that demonstrates the recycle-first approach.

“We don’t want to say we solved all the problems with this material,” Cho says. “Our battery performance was not fantastic because we used only this material as the entire electrolyte for the paper, but what we’re picturing is using this material as one layer in the battery electrolyte. It doesn’t have to be the entire electrolyte to kick off the recycling process.”

Cho also sees a lot of room for optimizing the material’s performance with further experiments.

Now, the researchers are exploring ways to integrate these kinds of materials into existing battery designs as well as implementing the ideas into new battery chemistries.

“It’s very challenging to convince existing vendors to do something very differently,” Cho says. “But with new battery materials that may come out in five or 10 years, it could be easier to integrate this into new designs in the beginning.”

Cho also believes the approach could help reshore lithium supplies by reusing materials from batteries that are already in the U.S.

“People are starting to realize how important this is,” Cho says. “If we can start to recycle lithium-ion batteries from battery waste at scale, it’ll have the same effect as opening lithium mines in the U.S. Also, each battery requires a certain amount of lithium, so extrapolating out the growth of electric vehicles, we need to reuse this material to avoid massive lithium price spikes.”

The work was supported, in part, by the National Science Foundation and the U.S. Department of Energy. This work was performed, in part, using the MIT.nano Characterization facilities.

© Image: Courtesy of the researchers, edited by MIT News

A depiction of batteries made with MIT researchers’ new electrolyte material, which is made from a class of molecules that self-assemble in water, named aramid amphiphiles (AAs), whose chemical structures and stability mimic Kevlar.

Why countries trade with each other while fighting

In World War II, Britain was fighting for its survival against German aerial bombardment. Yet Britain was importing dyes from Germany at the same time. This sounds curious, to put it mildly. How can two countries at war with each other also be trading goods?

Examples of this abound, actually. Britain also traded with its enemies for almost all of World War I. India and Pakistan conducted trade with each other during the First Kashmir War, from 1947 to 1949, and during the India-Pakistan War of 1965. Croatia and then-Yugoslavia traded with each other while fighting in 1992.

“States do in fact trade with their enemies during wars,” says MIT political scientist Mariya Grinberg. “There is a lot of variation in which products get traded, and in which wars, and there are differences in how long trade lasts into a war. But it does happen.”

Indeed, as Grinberg has found, state leaders tend to calculate whether trade can give them an advantage by boosting their own economies while not supplying their enemies with anything too useful in the near term.

“At its heart, wartime trade is all about the tradeoff between military benefits and economic costs,” Grinberg says. “Severing trade denies the enemy access to your products that could increase their military capabilities, but it also incurs a cost to you because you’re losing trade and neutral states could take over your long-term market share.” Therefore, many countries try trading with their wartime foes.

Grinberg explores this topic in a groundbreaking new book, the first one on the subject, “Trade in War: Economic Cooperation Across Enemy Lines,” published this month by Cornell University Press. It is also the first book by Grinberg, an assistant professor of political science at MIT.

Calculating time and utility

“Trade in War” has its roots in research Grinberg started as a doctoral student at the University of Chicago, where she noticed that wartime trade was a phenomenon not yet incorporated into theories of state behavior.

Grinberg wanted to learn about it comprehensively, so, as she quips, “I did what academics usually do: I went to the work of historians and said, ‘Historians, what have you got for me?’”

Modern wartime trading began during the Crimean War, which pitted Russia against France, Britain, the Ottoman Empire, and other allies. Before the war’s start in 1854, France had paid for many Russian goods that could not be shipped because ice in the Baltic Sea was late to thaw. To rescue its produce, France then persuaded Britain and Russia to adopt “neutral rights,” codified in the 1856 Declaration of Paris, which formalized the idea that goods in wartime could be shipped via neutral parties (sometimes acting as intermediaries for warring countries).

“This mental image that everyone has, that we don’t trade with our enemies during war, is actually an artifact of the world without any neutral rights,” Grinberg says. “Once we develop neutral rights, all bets are off, and now we have wartime trade.”

Overall, Grinberg’s systematic analysis of wartime trade shows that it needs to be understood on the level of particular goods. During wartime, states calculate how much it would hurt their own economies to stop trade of certain items; how useful specific products would be to enemies during war, and in what time frame; and how long a war is going to last.

“There are two conditions under which we can see wartime trade,” Grinberg says. “Trade is permitted when it does not help the enemy win the war, and it’s permitted when ending it would damage the state’s long-term economic security, beyond the current war.”

Therefore a state might export diamonds, knowing an adversary would need to resell such products over time to finance any military activities. Conversely, states will not trade products that can quickly convert into military use.

“The tradeoff is not the same for all products,” Grinberg says. “All products can be converted into something of military utility, but they vary in how long that takes. If I’m expecting to fight a short war, things that take a long time for my opponent to convert into military capabilities won’t help them win the current war, so they’re safer to trade.” Moreover, she adds, “States tend to prioritize maintaining their long-term economic stability, as long as the stakes don’t hit too close to home.”

This calculus helps explain some seemingly inexplicable wartime trade decisions. In 1917, three years into World War I, Germany started trading dyes to Britain. As it happens, dyes have military uses, for example as coatings for equipment. And World War I, infamously, was lasting far beyond initial expectations. But as of 1917, German planners thought the introduction of unrestricted submarine warfare would bring the war to a halt in their favor within a few months, so they approved the dye exports. That calculation was wrong, but it fits the framework Grinberg has developed.

States: Usually wrong about the length of wars

“Trade in War” has received praise from other scholars in the field. Michael Mastanduno of Dartmouth College has said the book “is a masterful contribution to our understanding of how states manage trade-offs across economics and security in foreign policy.”

For her part, Grinberg notes that her work holds multiple implications for international relations — one being that trade relationships do not prevent hostilities from unfolding, as some have theorized.

“We can’t expect even strong trade relations to deter a conflict,” Grinberg says. “On the other hand, when we learn our assumptions about the world are not necessarily correct, we can try to find different levers to deter war.”

Grinberg has also observed that states are not good, by any measure, at projecting how long they will be at war.

“States very infrequently get forecasts about the length of war right,” Grinberg says. That fact has formed the basis of a second, ongoing Grinberg book project.

“Now I’m studying why states go to war unprepared, why they think their wars are going to end quickly,” Grinberg says. “If people just read history, they will learn almost all of human history works against this assumption.”

At the same time, Grinberg thinks there is much more that scholars could learn specifically about trade and economic relations among warring countries — and hopes her book will spur additional work on the subject.

“I’m almost certain that I’ve only just begun to scratch the surface with this book,” she says. 

© Credit: Courtesy of Cornell University Press, and MIT Political Science

In research Grinberg started as a doctoral student, she noticed that wartime trade was a phenomenon not yet incorporated into theories of state behavior.

Pitch It! 2025: Re-imagining the Singapore story

From bustling hawker centres to iconic historical architecture, Singapore’s rich cultural heritage holds powerful potential for storytelling. In celebration of the nation’s 60th year of independence, Pitch It! 2025 shines a spotlight on the theme “The Singapore Story”. Organised annually by the NUS Communications and New Media Society (CNM Society) since 2013, Pitch It! is a nationwide competition that encourages tertiary students to unleash their creativity and inspire social change through the use of diverse forms of media.

This year’s edition, held from April to July, was organised in partnership with Mediacorp’s Bloomr.SG (a content creator network to foster creative talents), the Singapore Heritage Society, and the Singapore Film Society. Featuring the problem statement “How can we enhance the telling of The Singapore Story – celebrating its colonial landmarks, traditional music, kampong life, and hawker food – in a way that fosters unity, pride and cross-cultural appreciation?”, Pitch It! 2025 challenged participants to develop compelling advertising or social media campaigns to bring these stories to life.

Learning to tell cultural stories

The four-month journey saw 24 student teams from polytechnics and universities competing for up to S$3000 worth of cash prizes. To kickstart the creative process, participants attended two highly anticipated masterclasses.

The first, co-led by Mr Han Ming Guang from the Singapore Heritage Society and Ms Priyanka Nair from the Singapore Film Society, highlighted the importance of authenticity in presenting Singapore’s cultural narratives and the role of youth in shaping national memory. Participants were encouraged to think critically about how campaigns could move beyond nostalgia to foster inclusivity, representation, and long-term relevance. Drawing from their own experiences in heritage conservation, Mr Han and Ms Nair shared the challenges they faced in raising public awareness and offered practical insights into the strategies and campaign approaches they employed to address them.

In the second masterclass, Mr Diogo Martins and Ms Denise Tan from Mediacorp’s Bloomr.SG introduced participants to the anatomy of a successful digital campaign. Using real-life case studies, participants explored how to integrate social media trends with emotive storytelling to drive public engagement. They came away with a deeper understanding of how creativity, data, and empathy intersect to create impactful media campaigns in today’s fast-paced digital landscape. Renee Chew, a Year 2 Communications and New Media student from the NUS Faculty of Arts and Social Sciences (FASS), said, “The masterclasses were incredibly engaging and helped us to develop our campaign effectively.”

From concept to campaign

Armed with new insights, teams conducted research and fieldwork and developed campaign concepts ranging from a video series to interactive heritage trails. Following the preliminary round, five standout teams were shortlisted to present their campaigns at the Grand Finals, where they received constructive feedback from industry judges and had the opportunity to refine their proposals ahead of the final pitch.

The Pitch It! 2025 Grand Finals took place in July, with teams Ixora, Wing Wing, Need Compass and Singapore FM presenting their campaigns to a distinguished panel of industry judges comprising the speakers from the earlier masterclasses – Mr Han, Ms Nair, Mr Martins and Ms Lisa Low from Mediacorp’s Bloomr.SG.

The top prize went to Team Wing Wing for their campaign #HearOurSingaporeStory, which centred on the power of sound to connect generations. At unique phone booths located near historic sites such as the Victoria Concert Hall, Clarke Quay and the Esplanade, seniors and youths can share personal stories about Singapore’s iconic landmarks and traditions – creating more authentic and emotionally resonant stories which can be shared and appreciated across generations. Mr Han highlighted how the team had a long-term vision to sustain their campaign beyond their initial proposal and had good environmental scanning. This was one of many reasons why their pitch stood out from the rest.

“It was really fun coming up with bold, creative ideas to bring the Singapore Story to life,” shared Shernice Feng, a second-year Communication Studies student from the Nanyang Technological University’s Wee Kim Wee School of Communication and Information and a member of the winning team. “I found the experience very fulfilling as it gave us a platform to explore our own interests and perspectives.”

Also featured at the Grand Finals was a panel discussion on “Heritage, Media and Inclusive Narratives” with Mr Han, Ms Nair and Dr Jinna Tay, Senior Lecturer from the FASS CNM Department. The panel explored key challenges in heritage communication, including how to represent culture responsibly amid commercial interests, adapting to shifting media consumption patterns, sustaining long-term interest in heritage beyond one-off campaigns, and finding a balance between data-driven content strategies and meaningful storytelling.

As Pitch It! 2025 drew to a close, students had the opportunity to network with speakers and partners, engaging in meaningful conversations that extended well beyond the formal programme.

Preparing our youth for the real world

The competition not only showcased the creativity of Singapore’s youth but also provided a meaningful learning experience – connecting students with industry mentors and giving them a glimpse into the realities of campaign development.

Chieng Josiah, CNM Society Vice-President in charge of external relations and events and a Year 2 CNM student, was proud of his organising team, which included Project Directors Ho Ee Hsuen (Year 4 CNM) and Tran Nguyen Thao Anh (Year 2 CNM), for persevering through the academic year to plan this major competition.

He shared, “It has been a memorable experience planning and improving on this year’s edition to commemorate SG60 with our culture-focused and media-focused masterclasses and Grand Finals. It was also heartwarming to see our participants from various schools connect with industry experts and experienced individuals in the local cultural and heritage scene. We hope Pitch It! continues to inspire our aspiring communications professionals!”

“Pitch It! is a great opportunity for students to tackle real-world industry challenges,” noted Mr Martins. “Through the process, they gain valuable experience in aggregating insights to form coherent solutions. It also helps build their confidence – in public speaking, pitching ideas effectively, and competing at a professional level.”

By the CNM Society at the NUS Faculty of Arts and Social Sciences

Racing against antibiotic resistance 

Health

Racing against antibiotic resistance 

Experiments for antibiotic development using superbacteria in petri dishes.

Sy Boles

Harvard Staff Writer

long read

Scientists fear funding cuts will slow momentum in ongoing battle with evolving bacteria

Urgent Matters series

A series exploring how research is rising to major challenges in health and society

In 2023, more than 2.4 million cases of syphilis, gonorrhea, and chlamydia were diagnosed in the U.S. Though that number is high, it’s actually an improvement, according to the Centers for Disease Control and Prevention: The number of sexually transmitted infections, or STIs, decreased 1.8 percent overall from 2022 to 2023, with gonorrhea decreasing the most (7.2 percent).

But the number of STI diagnoses is only one part of the problem.

One treatment for STIs is doxycycline. It has been prescribed as a prophylactic for gonorrhea, recommended as a treatment for chlamydia since 2020, and used to treat syphilis during shortages of the preferred treatment, benzathine penicillin. But bacteria are living organisms, and like all living organisms, they evolve. Over time, they develop resistance mechanisms to the antibiotics we create to kill them. And according to Harvard immunologist Yonatan Grad, resistance to doxycycline is growing rapidly in the bacteria that cause gonorrhea.

“The increased use of doxycycline has, as we might have expected, selected for drug resistance,” Grad said. 

The pattern of bacteria evolving to overcome our best treatments is one of medicine’s most fundamental problems. Since the introduction of penicillin in the 1940s, antibiotics have radically transformed what’s possible in medicine, far beyond treatments for STIs. They can knock out the bacteria behind everything from urinary tract infections to meningitis to sepsis from infected wounds. But every antibiotic faces the same fate: As soon as it enters use, bacteria begin evolving to survive it.

The scope of the problem is staggering. Doctors wrote 252 million antibiotic prescriptions in 2023 in the U.S. That’s 756 prescriptions for every 1,000 people, up from 613 per 1,000 people in 2020. According to the CDC, more than 2.8 million antimicrobial-resistant (AMR) infections occur each year in the U.S., and more than 35,000 people die as a result.

Yonatan Grad.
Veasey Conway/Harvard Staff Photographer

“I think of antibiotics as infrastructure.”

Yonatan Grad

For researchers like Grad, the endless battle against the clock can be a bit like a game of high-stakes Whac-a-Mole — tracking antibiotic resistance, figuring out how it works, and developing new kinds of drugs before the bacteria can catch up. 

“Being able to treat these infections underlies so many aspects of medicine — urinary tract infections, caring for people who are immunocompromised, preventing surgical infections and treating them if they arise, and on and on,” said Grad. “This is foundational for modern clinical medicine and public health. Antibiotics are the support, the scaffolding on which medicine depends.”

Hold or release new drugs?

Grad’s research shows how quickly resistance can develop. In research described in a July letter in the New England Journal of Medicine, Grad and colleagues evaluated more than 14,000 genome sequences from Neisseria gonorrhoeae, the bacterium that causes gonorrhea, and found that carriage of a gene that confers resistance to tetracyclines — the class of antibiotics to which doxycycline belongs — shot up from 10 percent in 2020 to more than 30 percent in 2024.
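
The headline figure in that letter is, at heart, a carriage prevalence computed per collection year. A minimal sketch of that calculation, assuming a hypothetical table of genome metadata with a collection year and a resistance-gene flag; the toy counts are chosen only to echo the reported 10 percent and 30 percent:

```python
import pandas as pd

# Hypothetical per-genome metadata: collection year and whether a tetracycline
# resistance gene (e.g., a tetM flag from a genotyping pipeline) was detected.
genomes = pd.DataFrame({
    "year": [2020] * 10 + [2024] * 10,
    "tet_gene": [1, 0, 0, 0, 0, 0, 0, 0, 0, 0,   # 1 of 10 genomes in 2020
                 1, 1, 1, 0, 0, 0, 0, 0, 0, 0],  # 3 of 10 genomes in 2024
})

# Carriage prevalence of the resistance gene by collection year.
carriage = genomes.groupby("year")["tet_gene"].mean()
print(carriage)  # 2020: 0.10, 2024: 0.30
```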

Fortunately, doxycycline remains effective as a post-exposure prophylaxis for syphilis and chlamydia. It’s an open question why some pathogens are quicker to develop resistance than others. The urgency varies by organism, Grad said, with some, like Mycobacterium tuberculosis, the cause of tuberculosis, and Pseudomonas aeruginosa, showing “extremely drug-resistant or totally drug-resistant strains” that leave doctors facing untreatable infections.

The findings raise alarm bells, or at least questions, in doctors’ offices around the country: As bacteria develop resistance to tried-and-true antibiotics, when should new drugs be introduced for maximal utility before the bacteria inevitably outwit them, too? Traditional stewardship practice has recommended holding back new drugs until the old ones stop working. But 2023 research from Grad’s lab has challenged that approach. In mathematical models evaluating strategies for introducing a new antibiotic for gonorrhea, Grad found that the strategy of keeping the new antibiotic in reserve allowed antibiotic resistance to reach 5 percent much sooner than either introducing it right away or using it in combination with the existing drug.

Lifesaving progress halted

Extra time could be critical for Amory Houghton Professor of Chemistry Andrew Myers, whose lab has been developing new antibiotics, including ones that target gonorrhea, for more than 30 years.

“Most of the antibiotics in our ‘modern’ arsenal are some 50 years old and no longer work against a lot of the pathogens that are emerging in hospitals and even in the community,” Myers said. “It’s a huge problem and it’s not as well appreciated as I think it should be.”

Andrew Myers.
File photo by Stephanie Mitchell/Harvard Staff Photographer

“In my opinion, we can absolutely win the game — temporarily.”

Andrew Myers

Many antibiotics work by targeting and inhibiting the bacterial ribosome, the central machinery that translates the instructions in RNA into a protein readout. Ribosomes are “fantastically complex” 3D shapes, Myers said. Creating new antibiotics means inventing new chemical compounds that can bind like puzzle pieces into their grooves and protrusions.

“My lab will spend quite a lot of time, sometimes years, to develop the chemistry — to invent the chemistry — that allows us to prepare new members of these classes of antibiotics,” Myers said. “And then we spend years making quite literally thousands of different members of the class, and then we evaluate them. Do they kill bacteria? Do they kill bacteria that are resistant to existing antibiotics? We’ve been incredibly successful with this, one antibiotic class after another. The strategy works.” 

But it’s also in danger. The Trump administration ended a National Institutes of Health grant to Myers’ lab for the development of lincosamides, a class of antibiotics whose last approved member, clindamycin, dates to 1970. A second terminated NIH grant may kill a promising new antibiotic on the cusp of further development. Myers’ lab has created a new molecule that has proven effective in killing Klebsiella pneumoniae and E. coli, both identified by the World Health Organization as among the highest priority pathogens. Without continued funding, the molecule may not make it to the clinical trial phase and may never become an approved drug.

“A delusion among people is that these decisions can simply be reversed and these NIH grants restored,” Myers said. “That’s not true. The damage is real, and it’s irreversible in some cases.”


Carrying on Paul Farmer’s legacy

The funding cuts extend beyond individual labs to global health infrastructure. Carole Mitnick, a professor of global health and social medicine at Harvard Medical School, studies multidrug-resistant tuberculosis (MDR-TB) and has watched about 79 percent of USAID funding for global TB support get slashed this year.

“In the Democratic Republic of Congo, in Sierra Leone, and no doubt elsewhere, we’ve seen stocks of lifesaving anti-TB drugs sitting in warehouses, expiring, because programs that would have delivered them have been canceled or staff who would have collected them have been abruptly fired,” she said. “Not only is it immediately deadly and cruel not to deliver these lifesaving cures, but it sets the scene for more antimicrobial resistance by not delivering complete treatments. And it very clearly wastes U.S. taxpayer money to invest in the purchase of these drugs and let them sit in warehouses and expire.”

Mitnick’s work on multidrug-resistant TB, a form of antimicrobial resistance, builds on the legacy of Paul Farmer, the late Harvard professor and Partners In Health co-founder who revolutionized MDR-TB treatment by rejecting utilitarian approaches that wrote off the most vulnerable patients.

“Getting to know Paul and having him advise me, initially on my master’s thesis and ultimately on my doctoral dissertation, gave me a new framework,” Mitnick said. “It allowed me the freedom to use a social justice framework and to say that actually our research should be motivated by who’s suffering the greatest. How do we blend the research, which we’re very well placed to do at Harvard, with direct service and trying to reach the populations who are most marginalized? That shape is still very much in place and still informing the choices that several researchers in our department make in Paul’s legacy.” 

Carole Mitnick.
Veasey Conway/Harvard Staff Photographer

“Our research should be motivated by who’s suffering the greatest.”

Carole Mitnick

Globally, an estimated 500,000 people develop MDR-TB or its even hardier relative, extensively drug-resistant TB, each year. MDR-TB caused an estimated 150,000 deaths worldwide in 2023. TB is the poster child for pathogen characteristics and social conditions that favor selection for drug-resistant mutants. In a single case of TB, the bacterial population comprises bacteria at different stages of growth and in different environments of the body, requiring distinct drugs that can act on each of these forms. Multidrug treatment regimens are long (measured in months, not days) and toxic, making them difficult for people to complete. And in the absence of any incentives or requirements, there’s a long lag between developing new drugs and developing tests that can detect resistance to those drugs. Consequently, treatment is often delivered without any information about resistance, in turn generating more resistance.

The fight against MDR-TB has an unlikely new ally: Nerdfighters, the fan group of prominent video bloggers John and Hank Green — or, more specifically, a subset of that fandom calling themselves TBFighters. John Green’s 2024 book, “Everything Is Tuberculosis,” raised awareness about the prohibitive cost of TB diagnostic tests.

Mitnick said that in the acknowledgments, Green called his book a sort of love letter to Paul Farmer. “Paul didn’t directly introduce John to TB, but it really is Paul’s legacy that took John Green to Sierra Leone, and then he met this young man named Henry who had multidrug-resistant tuberculosis. It awakened in John the awareness that actually TB was not a disease of the past, but a disease very much of the present.” 

The TBFighters energized an existing coalition movement to reduce the cost of testing for TB and other diseases from about $10 per test to about $5 per test, based on estimates that $5 covered the cost of manufacturing plus a profit, even at lower sales volumes.

“It wasn’t until John Green and the TBFighters entered the fray in 2023 that we made any headway: The manufacturer announced a reduction of about 20 percent on the price of one TB test,” Mitnick said. “So not a full win, but a partial win.”

Despite the challenges, researchers remain cautiously optimistic. “In my opinion, we can absolutely win the game — temporarily,” said Myers. “Whatever we develop, bacteria will find a way to outwit us. But I’m optimistic that the molecules that we’re making could have a clinical lifetime of many decades, maybe even as long as 100 years, if they’re used prudently.” 

Grad sees his work more like the construction crews that repair the city sidewalk or maintain bridges. “I think of antibiotics as infrastructure,” he said. “These tools that we use to maintain our health require continual investment.” 

What makes us sleepy during the day?

Man sleeping at desk.
Health

What makes us sleepy during the day?

Research links by-products of steroid hormone to excessive daytime sleepiness

Jacqueline Mitchell

BIDMC Communications

3 min read

A new study sheds light on the biological underpinnings of excessive daytime sleepiness, a persistent and inappropriate urge to fall asleep during the day — during work, at meals, even mid-conversation — that interferes with daily functioning.

The findings, published in The Lancet eMedicine, open the door to exploring how nutrition, lifestyle, and environmental exposures interact with genetic and biological processes to affect alertness.

The findings add weight to the idea that excessive daytime sleepiness isn’t just the result of too little sleep.

“Recent studies identified genetic variants associated with excessive daytime sleepiness, but genetics explains only a small part of the story,” said co-corresponding author Tamar Sofer, director of Biostatistics and Bioinformatics at the Cardiovascular Institute at Beth Israel Deaconess Medical Center, and an associate professor at Harvard T.H. Chan School of Public Health and Harvard Medical School. “We wanted to identify biomarkers that can give stronger insights into the mechanisms of excessive daytime sleepiness and help explain why some people experience persistent sleepiness even when their sleep habits seem healthy.”

Investigators from Harvard-affiliated BIDMC and Brigham and Women’s Hospital turned to metabolite analysis to better understand the biology behind excessive daytime sleepiness. Metabolites are small molecules produced as the body carries out its normal functions, from synthesizing hormones to metabolizing nutrients to clearing environmental toxins. By measuring these metabolites, researchers created a profile of excessive daytime sleepiness.

The scientists analyzed blood levels of 877 metabolites in samples taken from more than 6,000 individuals in the Hispanic Community Health Study/Study of Latinos (HCHS/SOL), a long-running study sponsored by the National Institutes of Health since 2006. When they cross-referenced these data with participants’ self-reported measures of sleepiness on an official survey, investigators identified seven metabolites that were significantly linked with higher levels of excessive daytime sleepiness.
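
At its core, a screen like this is a large batch of regressions of the sleepiness measure on each metabolite, followed by a multiple-testing correction. A minimal sketch of that idea with simulated data; the published analysis additionally adjusts for covariates and the study’s survey design:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_people, n_metabolites = 6000, 877

# Simulated stand-ins: standardized metabolite levels and a sleepiness score,
# with a weak true association planted on the first metabolite.
metabolites = rng.normal(size=(n_people, n_metabolites))
sleepiness = 0.1 * metabolites[:, 0] + rng.normal(size=n_people)

# One simple linear regression per metabolite.
pvals = np.array([
    stats.linregress(metabolites[:, j], sleepiness).pvalue
    for j in range(n_metabolites)
])

# Benjamini-Hochberg false discovery rate control at 5 percent.
order = np.argsort(pvals)
cutoffs = 0.05 * np.arange(1, n_metabolites + 1) / n_metabolites
passed = pvals[order] <= cutoffs
n_hits = passed.nonzero()[0].max() + 1 if passed.any() else 0
print("significant metabolites:", order[:n_hits])
```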

The seven metabolites turned out to be involved in the production of steroids and other biological processes already implicated in excessive daytime sleepiness. When the investigators looked only at data from male participants, an additional three metabolites were identified, suggesting there might be sex-based biological differences in how excessive daytime sleepiness manifests.

The findings add weight to the idea that excessive daytime sleepiness isn’t just the result of too little sleep but can reflect physiological circumstances that might someday be diagnosed through blood tests or treated through targeted interventions.

“As we learn what’s happening biologically, we are beginning to understand how and why EDS occurs, the early signs that someone might have it, and what we can do to help patients,” said lead author Tariq Faquih, a postdoctoral research fellow in Sofer’s lab, the lab of Heming Wang at BWH, and a fellow in medicine at HMS. “These insights could eventually lead to new strategies for preventing or managing sleep disorders that include daytime sleepiness as a major symptom.”


This research was supported in part by the National Institutes of Health and the National Institute on Aging.

Locally produced proteins help mitochondria function

Our cells produce a variety of proteins, each with a specific role that, in many cases, means that they need to be in a particular part of the cell where that role is needed. One of the ways that cells ensure certain proteins end up in the right location at the right time is through localized translation, a process that ensures that proteins are made — or translated — close to where they will be needed. MIT professor of biology and Whitehead Institute for Biomedical Research member Jonathan Weissman and colleagues have studied localized translation in order to understand how it affects cell functions and allows cells to quickly respond to changing conditions.

Now, Weissman, who is also a Howard Hughes Medical Institute Investigator, and postdoc in his lab Jingchuan Luo have expanded our knowledge of localized translation at mitochondria, structures that generate energy for the cell. In an open-access paper published today in Cell, they share a new tool, LOCL-TL, for studying localized translation in close detail, and describe the discoveries it enabled about two classes of proteins that are locally translated at mitochondria.

The importance of localized translation at mitochondria relates to their unusual origin. Mitochondria were once bacteria that lived within our ancestors’ cells. Over time, the bacteria lost their autonomy and became part of the larger cells, which included migrating most of their genes into the larger cell’s genome in the nucleus. Cells evolved processes to ensure that proteins needed by mitochondria that are encoded in genes in the larger cell’s genome get transported to the mitochondria. Mitochondria retain a few genes in their own genome, so production of proteins from the mitochondrial genome and that of the larger cell’s genome must be coordinated to avoid mismatched production of mitochondrial parts. Localized translation may help cells to manage the interplay between mitochondrial and nuclear protein production — among other purposes.

How to detect local protein production

For a protein to be made, genetic code stored in DNA is read into RNA, and then the RNA is read or translated by a ribosome, a cellular machine that builds a protein according to the RNA code. Weissman’s lab previously developed a method to study localized translation by tagging ribosomes near a structure of interest, and then capturing the tagged ribosomes in action and observing the proteins they are making. This approach, called proximity-specific ribosome profiling, allows researchers to see what proteins are being made where in the cell. The challenge that Luo faced was how to tweak this method to capture only ribosomes at work near mitochondria.

Ribosomes work quickly, so a ribosome that gets tagged while making a protein at the mitochondria can move on to making other proteins elsewhere in the cell in a matter of minutes. The only way researchers can guarantee that the ribosomes they capture are still working on proteins made near the mitochondria is if the experiment happens very quickly.

Weissman and colleagues had previously solved this time sensitivity problem in yeast cells with a ribosome-tagging tool called BirA that is activated by the presence of the molecule biotin. BirA is fused to the cellular structure of interest, and tags ribosomes it can touch — but only once activated. Researchers keep the cell depleted of biotin until they are ready to capture the ribosomes, to limit the time when tagging occurs. However, this approach does not work with mitochondria in mammalian cells because they need biotin to function normally, so it cannot be depleted.

Luo and Weissman adapted the existing tool to respond to blue light instead of biotin. The new tool, LOV-BirA, is fused to the mitochondrion’s outer membrane. Cells are kept in the dark until the researchers are ready. Then they expose the cells to blue light, activating LOV-BirA to tag ribosomes. They give it a few minutes and then quickly extract the ribosomes. This approach proved very accurate at capturing only ribosomes working at mitochondria.

The researchers then used a method originally developed by the Weissman lab to extract the sections of RNA inside of the ribosomes. This allows them to see exactly how far along in the process of making a protein the ribosome is when captured, which can reveal whether the entire protein is made at the mitochondria, or whether it is partly produced elsewhere and only gets completed at the mitochondria.
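
Conceptually, the readout is a set of ribosome footprint positions along each transcript: footprints that cover a coding sequence from its start indicate the protein is made at the mitochondria from the beginning, while footprints that appear only beyond some point indicate production started elsewhere and finished there. A toy sketch of that classification, with hypothetical footprint positions given as codon indices and illustrative thresholds (the actual LOCL-TL analysis is far more involved):

```python
import numpy as np

def classify_local_translation(footprint_codons, cds_length_codons,
                               early_fraction=0.25, min_early_share=0.1):
    """Toy call for one transcript, from mitochondria-captured footprints:
    'fully local' if a meaningful share of footprints fall early in the coding
    sequence, 'completed at mitochondria' if footprints appear only later."""
    footprints = np.asarray(footprint_codons)
    if footprints.size == 0:
        return "no signal"
    early_cutoff = early_fraction * cds_length_codons
    if (footprints < early_cutoff).mean() >= min_early_share:
        return "fully local"
    return "completed at mitochondria"

# Hypothetical examples: codon positions of captured footprints on a 450-codon CDS.
print(classify_local_translation([12, 80, 150, 300, 410], 450))  # fully local
print(classify_local_translation([260, 310, 390, 440], 450))     # completed at mitochondria
```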

“One advantage of our tool is the granularity it provides,” Luo says. “Being able to see what section of the protein is locally translated helps us understand more about how localized translation is regulated, which can then allow us to understand its dysregulation in disease and to control localized translation in future studies.”

Two protein groups are made at mitochondria

Using these approaches, the researchers found that about 20 percent of the mitochondria-destined proteins encoded in the main cellular genome are locally translated at mitochondria. These proteins can be divided into two distinct groups with different evolutionary histories and mechanisms for localized translation.

One group consists of relatively long proteins, each containing more than 400 amino acids or protein building blocks. These proteins tend to be of bacterial origin — present in the ancestor of mitochondria — and they are locally translated in both mammalian and yeast cells, suggesting that their localized translation has been maintained through a long evolutionary history.

Like many mitochondrial proteins encoded in the nucleus, these proteins contain a mitochondrial targeting sequence (MTS), a ZIP code that tells the cell where to bring them. The researchers discovered that most proteins containing an MTS also contain a nearby inhibitory sequence that prevents transportation until they are done being made. This group of locally translated proteins lacks the inhibitory sequence, so they are brought to the mitochondria during their production.

Production of these longer proteins begins anywhere in the cell, and then after approximately the first 250 amino acids are made, they get transported to the mitochondria. While the rest of the protein gets made, it is simultaneously fed into a channel that brings it inside the mitochondrion. This ties up the channel for a long time, limiting import of other proteins, so cells can only afford to do this simultaneous production and import for select proteins. The researchers hypothesize that these bacterial-origin proteins are given priority as an ancient mechanism to ensure that they are accurately produced and placed within mitochondria.

The second locally translated group consists of short proteins, each less than 200 amino acids long. These proteins are more recently evolved, and correspondingly, the researchers found that the mechanism for their localized translation is not shared by yeast. Their mitochondrial recruitment happens at the RNA level: two sequences in the regulatory, non-protein-coding sections of each RNA molecule serve as signals for the cell’s machinery to recruit the RNAs to the mitochondria.
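
The two groups described above can be summarized with a simple length rule; a toy helper using only the thresholds quoted in the article:

```python
def locally_translated_group(length_aa: int) -> str:
    """Assign a locally translated mitochondrial protein to one of the two
    groups described in the study, using the quoted length thresholds."""
    if length_aa > 400:
        return "long: bacterial origin, imported while still being made"
    if length_aa < 200:
        return "short: recruited to mitochondria at the RNA level"
    return "outside the two described groups"

print(locally_translated_group(520))  # long group
print(locally_translated_group(150))  # short group
```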

The researchers searched for molecules that might be involved in this recruitment, and identified the RNA binding protein AKAP1, which exists at mitochondria. When they eliminated AKAP1, the short proteins were translated indiscriminately around the cell. This provided an opportunity to learn more about the effects of localized translation, by seeing what happens in its absence. When the short proteins were not locally translated, this led to the loss of various mitochondrial proteins, including those involved in oxidative phosphorylation, our cells’ main energy generation pathway.

In future research, Weissman and Luo will delve deeper into how localized translation affects mitochondrial function and dysfunction in disease. The researchers also intend to use LOCL-TL to study localized translation in other cellular processes, including in relation to embryonic development, neural plasticity, and disease.

“This approach should be broadly applicable to different cellular structures and cell types, providing many opportunities to understand how localized translation contributes to biological processes,” Weissman says. “We’re particularly interested in what we can learn about the roles it may play in diseases including neurodegeneration, cardiovascular diseases, and cancers.”

© Image: Devin Powell/Whitehead Institute

Localized translation may help cells to manage the interplay between mitochondrial and nuclear protein production — among other purposes.


SHASS announces appointments of new program and section heads for 2025-26

The MIT School of Humanities, Arts, and Social Sciences announced leadership changes in three of its academic units for the 2025-26 academic year.

“We have an excellent cohort of leaders coming in,” says Agustín Rayo, the Kenan Sahin Dean of the School of Humanities, Arts, and Social Sciences. “I very much look forward to working with them and welcoming them into the school's leadership team.”

Sandy Alexandre will serve as head of MIT Literature. Alexandre is an associate professor of literature and served as co-head of the section in 2024-25. Her research spans late-19th-century to present-day Black American literature and culture. Her first book, “The Properties of Violence: Claims to Ownership in Representations of Lynching,” uses the history of American lynching violence as a framework to understand matters concerning displacement, property ownership, and the American pastoral ideology in a literary context. Her work explores how literature envisions ecologies of people, places, and objects as recurring echoes of racial violence, resonating across the long arc of U.S. history. She earned a bachelor’s degree in English language and literature from Dartmouth College and a master’s and PhD in English from the University of Virginia.

Manduhai Buyandelger will serve as director of the Program in Women’s and Gender Studies. A professor of anthropology, Buyandelger’s research seeks to find solutions for achieving more-integrated (and less-violent) lives for humans and non-humans by examining the politics of multi-species care and exploitation, urbanization, and how diverse material and spiritual realities interact and shape the experiences of different beings. By examining urban multi-species coexistence in different places in Mongolia, the United States, Japan, and elsewhere, her study probes possibilities for co-cultivating an integrated multi-species existence. She is also developing an anthro-engineering project with the MIT Department of Nuclear Science and Engineering (NSE) to explore pathways to decarbonization in Mongolia by examining user-centric design and responding to political and cultural constraints on clean-energy issues. She offers a transdisciplinary course with NSE, 21A.S01 (Anthro-Engineering: Decarbonization at the Million Person Scale), in collaboration with her colleagues in Mongolia’s capital, Ulaanbaatar. She has written two books on religion, gender, and politics in post-socialist Mongolia: “Tragic Spirits: Shamanism, Gender, and Memory in Contemporary Mongolia” (University of Chicago Press, 2013) and “A Thousand Steps to the Parliament: Constructing Electable Women in Mongolia” (University of Chicago Press, 2022). Her essays have appeared in American Ethnologist, Journal of Royal Anthropological Association, Inner Asia, and Annual Review of Anthropology. She earned a BA in literature and linguistics and an MA in philology from the National University of Mongolia, and a PhD in social anthropology from Harvard University.

Eden Medina PhD ’05 will serve as head of the Program in Science, Technology, and Society. A professor of science, technology, and society, Medina studies the relationship of science, technology, and processes of political change in Latin America. She is the author of “Cybernetic Revolutionaries: Technology and Politics in Allende's Chile” (MIT Press, 2011), which won the 2012 Edelstein Prize for best book on the history of technology and the 2012 Computer History Museum Prize for best book on the history of computing. Her co-edited volume “Beyond Imported Magic: Essays on Science, Technology, and Society in Latin America” (MIT Press, 2014) received the Amsterdamska Award from the European Society for the Study of Science and Technology (2016). In addition to her writings, Medina co-curated the exhibition “How to Design a Revolution: The Chilean Road to Design,” which opened in 2023 at the Centro Cultural La Moneda in Santiago, Chile, and is currently on display at the design museum Disseny Hub in Barcelona, Spain. She holds a PhD in the history and social study of science and technology from MIT and a master’s degree in studies of law from Yale Law School. She worked as an electrical engineer prior to starting her graduate studies.

Joining the SHASS leadership team are (left to right) Sandy Alexandre, Manduhai Buyandelger, and Eden Medina.

Fikile Brushett named director of MIT chemical engineering practice school

Fikile R. Brushett, a Ralph Landau Professor of Chemical Engineering Practice, was named director of MIT’s David H. Koch School of Chemical Engineering Practice, effective July 1. In this role, Brushett will lead one of MIT’s most innovative and distinctive educational programs.

Brushett joined the chemical engineering faculty in 2012 and has been a deeply engaged member of the department. An internationally recognized leader in the field of energy storage, his research advances the science and engineering of electrochemical technologies for a sustainable energy economy. He is particularly interested in the fundamental processes that define the performance, cost, and lifetime of present-day and next-generation electrochemical systems. In addition to his research, Brushett has served as a first-year undergraduate advisor, as a member of the department’s graduate admissions committee, and on MIT’s Committee on the Undergraduate Program.

“Fik’s scholarly excellence and broad service position him perfectly to take on this new challenge,” says Kristala L. J. Prather, the Arthur D. Little Professor and head of the Department of Chemical Engineering (ChemE). “His role as practice school director reflects not only his technical expertise, but his deep commitment to preparing students for meaningful, impactful careers. I’m confident he will lead the practice school with the same spirit of excellence and innovation that has defined the program for generations.”

Brushett succeeds T. Alan Hatton, a Ralph Landau Professor of Chemical Engineering Practice Post-Tenure, who directed the practice school for 36 years. For many, Hatton’s name is synonymous with the program. When he became director in 1989, only a handful of major chemical companies hosted stations.

“I realized that focusing on one industry segment was not sustainable and did not reflect the breadth of a chemical engineering education,” Hatton recalls. “So I worked to modernize the experience for students and have it reflect the many ways chemical engineers practice in the modern world.”

Under Hatton’s leadership, the practice school expanded globally and across industries, providing students with opportunities to work on diverse technologies in a wide range of locations. He pioneered the model of recruiting new companies each year, allowing many more firms to participate while also spreading costs across a broader sponsor base. He also introduced an intensive, hands-on project management course at MIT during Independent Activities Period, which has become a valuable complement to students’ station work and future careers.

Value for students and industry

The practice school benefits not only students, but also the companies that host them. By embedding teams directly into manufacturing plants and R&D centers, businesses gain fresh perspectives on critical technical challenges, coupled with the analytical rigor of MIT-trained problem-solvers. Many sponsors report that projects completed by practice school students have yielded measurable cost savings, process improvements, and even new opportunities for product innovation.

For manufacturing industries, where efficiency, safety, and sustainability are paramount, the program provides actionable insights that help companies strengthen competitiveness and accelerate growth. The model creates a unique partnership: students gain true real-world training, while companies benefit from MIT expertise and the creativity of the next generation of chemical engineers.

A century of hands-on learning

Founded in 1916 by MIT chemical engineering alumnus Arthur D. Little and Professor William Walker, with funding from George Eastman of Eastman Kodak, the practice school was designed to add a practical dimension to chemical engineering education. The first five sites — all in the Northeast — focused on traditional chemical industries working on dyes, abrasives, solvents, and fuels.

Today, the program remains unique in higher education. Students consult with companies worldwide across fields ranging from food and pharmaceuticals to energy and finance, tackling some of industry’s toughest challenges. More than a hundred years after its founding, the practice school continues to embody MIT’s commitment to hands-on, problem-driven learning that transforms both students and the industries they serve.

The practice school experience is part of ChemE’s MSCEP and PhD/ScDCEP programs. After coursework for each program is completed, a student attends practice school stations at host company sites. A group of six to 10 students spends two months each at two stations; each station experience includes teams of two or three students working on a month-long project, where they will prepare formal talks, scope of work, and a final report for the host company. Recent stations include Evonik in Marl, Germany; AstraZeneca in Gaithersburg, Maryland; EGA in Dubai, UAE; AspenTech in Bedford, Massachusetts; and Shell Technology Center and Dimensional Energy in Houston, Texas.

© Photo: Lillie Paquette

Professor Fikile Brushett

New method could monitor corrosion and cracking in a nuclear reactor

MIT researchers have developed a technique that enables real-time, 3D monitoring of corrosion, cracking, and other material failure processes inside a nuclear reactor environment.

This could allow engineers and scientists to design safer nuclear reactors that also deliver higher performance for applications like electricity generation and naval vessel propulsion.

During their experiments, the researchers utilized extremely powerful X-rays to mimic the behavior of neutrons interacting with a material inside a nuclear reactor.

They found that adding a buffer layer of silicon dioxide between the material and its substrate, and keeping the material under the X-ray beam for a longer period of time, improves the stability of the sample. This allows for real-time monitoring of material failure processes.

By reconstructing 3D image data on the structure of a material as it fails, researchers could design more resilient materials that can better withstand the stress caused by irradiation inside a nuclear reactor.

“If we can improve materials for a nuclear reactor, it means we can extend the life of that reactor. It also means the materials will take longer to fail, so we can get more use out of a nuclear reactor than we do now. The technique we’ve demonstrated here allows us to push the boundary in understanding how materials fail in real time,” says Ericmoore Jossou, who holds shared appointments in the Department of Nuclear Science and Engineering (NSE), where he is the John Clark Hardwick Professor, the Department of Electrical Engineering and Computer Science (EECS), and the MIT Schwarzman College of Computing.

Jossou, senior author of a study on this technique, is joined on the paper by lead author David Simonne, an NSE postdoc; Riley Hultquist, a graduate student in NSE; Jiangtao Zhao, of the European Synchrotron; and Andrea Resta, of Synchrotron SOLEIL. The research was published Tuesday in the journal Scripta Materialia.

“Only with this technique can we measure strain with a nanoscale resolution during corrosion processes. Our goal is to bring such novel ideas to the nuclear science community while using synchrotrons both as an X-ray probe and radiation source,” adds Simonne.

Real-time imaging

Studying real-time failure of materials used in advanced nuclear reactors has long been a goal of Jossou’s research group.

Usually, researchers can only learn about such material failures after the fact, by removing the material from its environment and imaging it with a high-resolution instrument.

“We are interested in watching the process as it happens. If we can do that, we can follow the material from beginning to end and see when and how it fails. That helps us understand a material much better,” he says.

They simulate the process by firing an extremely focused X-ray beam at a sample to mimic the environment inside a nuclear reactor. The researchers must use a special type of high-intensity X-ray, which is only found in a handful of experimental facilities worldwide.

For these experiments they studied nickel, a material incorporated into alloys that are commonly used in advanced nuclear reactors. But before they could start the X-ray equipment, they had to prepare a sample.

To do this, the researchers used a process called solid state dewetting, which involves putting a thin film of the material onto a substrate and heating it to an extremely high temperature in a furnace until it transforms into single crystals.

“We thought making the samples was going to be a walk in the park, but it wasn’t,” Jossou says.

As the nickel heated up, it interacted with the silicon substrate and formed a new chemical compound, essentially derailing the entire experiment. After much trial-and-error, the researchers found that adding a thin layer of silicon dioxide between the nickel and substrate prevented this reaction.

But when crystals formed on top of the buffer layer, they were highly strained. This means the individual atoms had moved slightly to new positions, causing distortions in the crystal structure.

Phase retrieval algorithms can typically recover the 3D size and shape of a crystal in real-time, but if there is too much strain in the material, the algorithms will fail.

However, the team was surprised to find that keeping the X-ray beam trained on the sample for a longer period of time caused the strain to slowly relax, thanks to the silicon dioxide buffer layer. After a few extra minutes of X-rays, the sample was stable enough that they could use phase retrieval algorithms to accurately recover the 3D shape and size of the crystal.
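
Phase retrieval here means recovering the crystal’s 3D shape from measured diffraction amplitudes alone, by iterating between the measurement constraint in Fourier space and a support constraint in real space. A minimal error-reduction sketch of that general idea in NumPy; the algorithms actually used for Bragg coherent diffraction imaging of strained crystals are considerably more sophisticated:

```python
import numpy as np

def error_reduction(measured_amplitude, support, n_iter=200, seed=0):
    """Minimal error-reduction phase retrieval: alternate between enforcing
    the measured Fourier-space amplitudes and a known real-space support."""
    rng = np.random.default_rng(seed)
    # Start from the measured amplitudes paired with random phases.
    phases = np.exp(1j * rng.uniform(0, 2 * np.pi, measured_amplitude.shape))
    obj = np.fft.ifftn(measured_amplitude * phases)
    for _ in range(n_iter):
        F = np.fft.fftn(obj)
        # Fourier-space constraint: keep current phases, impose measured amplitudes.
        F = measured_amplitude * np.exp(1j * np.angle(F))
        obj = np.fft.ifftn(F)
        # Real-space constraint: the object is zero outside its support.
        obj = obj * support
    return obj

# Toy usage: diffraction amplitudes of a small cubic "crystal" with a known support.
true_obj = np.zeros((32, 32, 32))
true_obj[12:20, 12:20, 12:20] = 1.0
amplitude = np.abs(np.fft.fftn(true_obj))
recovered = error_reduction(amplitude, support=(true_obj > 0))
```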

“No one had been able to do that before. Now that we can make this crystal, we can image electrochemical processes like corrosion in real time, watching the crystal fail in 3D under conditions that are very similar to inside a nuclear reactor. This has far-reaching impacts,” he says.

They experimented with different substrates, such as niobium-doped strontium titanate, and found that only a silicon wafer buffered with silicon dioxide created this unique effect.

An unexpected result

As they fine-tuned the experiment, the researchers discovered something else.

They could also use the X-ray beam to precisely control the amount of strain in the material, which could have implications for the development of microelectronics.

In the microelectronics community, engineers often introduce strain to deform a material’s crystal structure in a way that boosts its electrical or optical properties.

“With our technique, engineers can use X-rays to tune the strain in microelectronics while they are manufacturing them. While this was not our goal with these experiments, it is like getting two results for the price of one,” he adds.

In the future, the researchers want to apply this technique to more complex materials like steel and other metal alloys used in nuclear reactors and aerospace applications. They also want to see how changing the thickness of the silicon dioxide buffer layer impacts their ability to control the strain in a crystal sample.

“This discovery is significant for two reasons. First, it provides fundamental insight into how nanoscale materials respond to radiation — a question of growing importance for energy technologies, microelectronics, and quantum materials. Second, it highlights the critical role of the substrate in strain relaxation, showing that the supporting surface can determine whether particles retain or release strain when exposed to focused X-ray beams,” says Edwin Fohtung, an associate professor at the Rensselaer Polytechnic Institute, who was not involved with this work.

This work was funded, in part, by the MIT Faculty Startup Fund and the U.S. Department of Energy. The sample preparation was carried out, in part, at the MIT.nano facilities.

© Image: iStock; MIT News

“If we can improve materials for a nuclear reactor, it means we can extend the life of that reactor,” says Ericmoore Jossou.

Rising temperatures intensify supercell thunderstorms in Europe

In a groundbreaking study, researchers from the University of Bern and ETH Zurich have shown how climate change is intensifying supercell thunderstorms in Europe. At a global temperature increase of 3 degrees Celsius, these powerful storms are expected to occur more frequently, especially in the Alpine region.

Solving evolutionary mystery of how humans came to walk upright

Science & Tech

Solving evolutionary mystery of how humans came to walk upright

Gayani Senevirathne and Terence Capellini holding models of pelvises.

Gayani Senevirathne (left) holds the shorter, wider human pelvis, which evolved from the longer upper hipbones of primates, which Terence Capellini is displaying.

Niles Singer/Harvard Staff Photographer

Kermit Pattison

Harvard Staff Writer

6 min read

New study identifies genetic, developmental shifts that resculpted pelvis, setting ancestors apart from other primates

The pelvis is often called the keystone of upright locomotion. More than any other part of our lower body, it has been radically altered over millions of years, allowing our ancestors to become the bipeds who trekked and settled across the planet.

But just how evolution accomplished this extreme makeover has remained a mystery. Now a new study in the journal Nature led by Harvard scientists reveals two key genetic changes that remodeled the pelvis and enabled our bizarre habit of walking on two legs.

“What we’ve done here is demonstrate that in human evolution there was a complete mechanistic shift,” said Terence Capellini, professor and chair of the Department of Human Evolutionary Biology and senior author of the paper. “There’s no parallel to that in other primates. The evolution of novelty — the transition from fins to limbs or the development of bat wings from fingers — often involves massive shifts in how developmental growth occurs. Here we see humans are doing the same thing, but for their pelves.”

Anatomists have long known that the human pelvis is unique among primates. The upper hipbones, or ilia, of chimpanzees, bonobos, and gorillas — our closest relatives — are tall, narrow, and oriented flat front to back. From the side they look like thin blades. The geometry of the ape pelvis anchors large muscles for climbing.

In humans, the hipbones have rotated to the sides to form a bowl shape (in fact, the word “pelvis” derives from the Latin word for basin). Our flaring hipbones provide attachments for the muscles that allow us to maintain balance as we shift our weight from one leg to the other while walking and running.

In their new paper, the international team of researchers identified some of the key genetic and developmental shifts that radically resculpted the quadrupedal ape pelvis into a bipedal one.

“What we have tried to do is integrate different approaches to get a complete story about how the pelvis developed over time,” said Gayani Senevirathne, a postdoctoral fellow in Capellini’s lab and study lead author.

Senevirathne analyzed 128 samples of embryonic tissues from humans and nearly two dozen other primate species from museums in the U.S. and Europe. These collections included century-old specimens mounted on glass slides or preserved in jars.

The researchers also studied human embryonic tissues collected by the Birth Defects Research Laboratory at the University of Washington. They took CT scans and analyzed histology (the microscopic structure of tissues) to reveal the anatomy of the pelvis during early stages of development.

“The work that Gayani did was a tour de force,” said Capellini. “This was like five projects in one.”

The researchers discovered that evolution reshaped the human pelvis in two major steps. First, it shifted a growth plate by 90 degrees to make the human ilium wide instead of tall. Later, another shift altered the timeline of embryonic bone formation.

Most bones of the lower body take shape through a process that begins when cartilage cells form on growth plates aligned along the long axis of the growing bone. This cartilage later hardens into bone in a process called ossification.

In the early stages of development, the human iliac growth plate formed with growth aligned head-to-tail just as it did in other primates. But by day 53, the growth plates in humans evolved to radically shift perpendicularly from the original axis — thus shortening and broadening the hipbone.

“Looking at the pelvis, that wasn’t on my radar,” said Capellini. “I was expecting a stepwise progression for shortening it and then widening it. But the histology really revealed that it actually flipped 90 degrees — making it short and wide all at the same time.”


Another major change involved the timeline of bone formation.

Most bones form along a primary ossification center in the middle of the bone shaft.

In humans, however, the ilia do something quite different. Ossification begins in the rear of the sacrum and spreads radially. This mineralization remains restricted to the peripheral layer and ossification of the interior is delayed by 16 weeks compared to other primates — allowing the bone to maintain its shape as it grows and fundamentally changing the geometry.

“Embryonically, at 10 weeks you have a pelvis,” said Capellini as he sketched on a whiteboard. “It looks like this — basin-shaped.”

To identify the molecular forces that drove this shift, Senevirathne employed techniques such as single-cell multiomics and spatial transcriptomics. The team identified more than 300 genes at work, including three with outsized roles — SOX9 and PTH1R (controlling the growth plate shift), and RUNX2 (controlling the change in ossification).

The importance of these genes was underscored in diseases caused by their malfunction. For example, a mutation in SOX9 causes campomelic dysplasia, a disorder that results in hipbones that are abnormally narrow and lack lateral flaring.

Similarly, mutations in PTH1R cause abnormally narrow hipbones and other skeletal diseases.

The authors suggest that these changes began with reorientation of growth plates around the time that our ancestors branched from the African apes, estimated to be between 5 million and 8 million years ago.

They believe that the pelvis remained a hotspot of evolutionary change for millions of years.

As brains grew bigger, the pelvis came under another selective pressure known as the “obstetrical dilemma” — the tradeoff between a narrow pelvis (advantageous for efficient locomotion) and a wide one (facilitating the birth of big-brained babies).

They suggest that the delayed ossification probably occurred in the last 2 million years.

The oldest pelvis in the fossil record is the 4.4-million-year-old Ardipithecus from Ethiopia (a hybrid of an upright walker and tree climber with a grasping toe), and it shows hints of humanlike features in the pelvis.

The famous 3.2-million-year-old Lucy skeleton, also from Ethiopia, includes a pelvis that shows further development of bipedal traits such as flaring hip blades for bipedal muscles.

Capellini believes the new study should prompt scientists to rethink some basic assumptions about human evolution.

“All fossil hominids from that point on were growing the pelvis differently from any other primate that came before,” said Capellini. “Brain size increases that happen later should not be interpreted with a model of growth based on chimpanzees and other primates. The model should be what happens in humans and hominins. The later growth of fetal head size occurred against the backdrop of a new way of making the pelvis.”


This research was funded in part by the National Institutes of Health.

When global trade is about more than money


David Y. Yang.

Niles Singer/Harvard Staff Photographer

Nation & World

When global trade is about more than money

Economist’s new tool looks at how China is more effective than U.S. in exerting political power through import, export controls

Christy DeSmith

Harvard Staff Writer

6 min read

International trade can yield far more than imports and exports. According to David Y. Yang, Yvonne P. L. Lui Professor of Economics, trade can be used to wield political power.

Yang watched as China imposed trade restrictions on competitor Taiwan following a 2022 visit to the island by U.S. Speaker of the House Nancy Pelosi. A decade earlier, the arrest of a Chinese fishing boat captain in contested waters culminated with Beijing blocking exports to Japan of certain rare earth minerals, critical components for wind turbines and electric vehicles.

“Another example is China banning the import of Norwegian salmon for nearly a decade as punishment for awarding a Nobel Prize to the dissident Liu Xiaobo,” said Yang, a political economist with expertise in the East Asian superpower.

His latest working paper, co-authored with Princeton’s Ernest Liu, presents a framework for measuring how much geopolitical muscle a country can flex by threatening trade disruptions. Today, the economists find, China exerts outsized influence over trading partners, while the United States has less power than expected relative to the size of its economy.

“With the arrival of new data sources and empirical tools, this is something we can now study very rigorously,” Yang emphasized. “Conducting these objective, data-driven analyses feels all the more urgent in today’s global geopolitical climate.”

Their model specifically tests a set of predictions made by mid-20th-century Harvard professor Albert O. Hirschman, a German-born Jew who fled Europe during World War II. His book “National Power and the Structure of Foreign Trade” (1945) offered a theoretical account of how countries might use trade to assert geopolitical dominance.

“Hirschman viewed the issue through a positive lens,” Yang noted. “Rather than bombing each other, countries could just fight economic wars to achieve the same goals.”

Hirschman saw that trade asymmetries could be exploited. But deficits and surpluses weren’t the only relevant variables. Also important was how crucial and easily replaced the goods in question were. Halting the flow of crude oil tends to pack a far bigger punch than withholding textile exports.

“If one country becomes overly reliant on another, it might be economically efficient,” Yang explained. “But it can leave the first country vulnerable by exposing it to unfavorable power dynamics.”
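
The working paper’s exact formula isn’t given here, but Hirschman’s core intuition can be sketched with a toy calculation: a country’s leverage over a partner grows with the partner’s reliance on goods that are both critical and hard to replace. In the minimal sketch below, every good, number, and weight is hypothetical, chosen only to illustrate the asymmetry rather than to reproduce the Liu–Yang model.

```python
# Toy illustration (not the Liu-Yang model): a Hirschman-style measure of the
# leverage country A gains from country B's reliance on its exports.
# All goods, shares, and weights below are hypothetical.

goods = [
    # (good, share of B's imports of this good sourced from A,
    #  how critical the good is to B's economy [0-1],
    #  how easily B could replace A as a supplier [0-1])
    ("crude oil",   0.70, 0.9, 0.2),
    ("rare earths", 0.80, 0.8, 0.1),
    ("textiles",    0.60, 0.2, 0.9),
]

def dependence(goods):
    """Sum each good's contribution: a large sourcing share of a critical,
    hard-to-replace good is what creates the asymmetry Hirschman described."""
    return sum(share * criticality * (1.0 - substitutability)
               for _, share, criticality, substitutability in goods)

print(f"B's dependence on A: {dependence(goods):.2f}")
```

Under a weighting like this, crude oil and rare earths dominate the score while textiles barely register, which is the point of the oil-versus-textiles comparison above: withholding a critical, hard-to-substitute flow packs a far bigger punch than withholding easily replaced goods.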

Hirschman’s ideas seemed less relevant in the post-war years, with the widespread desire for increased free trade. But the book feels fresh again today, said Yang, who recently assigned it in an undergraduate economics course.

“I asked students to read the first few chapters and guess when it was written,” he recalled. “Many guessed it was last year.”

Yang and Liu set about formalizing Hirschman’s vision about three years ago, long before the current suite of aggressive U.S. tariffs. “A lot of the anecdotal examples that motivated our work came from China,” Yang said.

Indeed, their model shows China’s trade power rising over the past two decades as it turned key industries into political instruments. Chemical products, medical instruments, and electrical equipment emerged as especially potent. The country’s trade power proved larger than expected given the size of its GDP, which trails the world’s largest economy by many trillions of dollars.

U.S. trading power over China declines

Graphic from Yang’s study showing U.S. power over other countries over time.

This figure plots the directed power (in all sectors) between the U.S. and a country for each year.

Credit: Ernest Liu and David Y. Yang

“In the early 2000s, the U.S. was able to exert more absolute power over China through trade disruptions,” said Yang, noting that findings on the U.S. were relatively stable over the 20-year period they studied.

“But things have quickly flipped,” he continued. “China now has more trade power over the U.S. and, at the moment, can exert positive power over any other entity in the world.”

China’s trading power on the rise

Graphic from Yang’s study showing China’s power over other countries over time.

This figure plots the directed power (in all sectors) between China and a country for each year.

Credit: Ernest Liu and David Y. Yang

Yang and Liu also tested a pair of predictions concerning the consequences of unbalanced power. First, the economists tapped a database of millions of events involving the governments of two trading partners, confirming that negotiations and other forms of engagement increase with the asymmetries Hirschman described.

Another dataset, sourced from international opinion polls, was used to gauge bilateral geopolitical alignment over time and to verify a second predicted consequence. Yang and Liu found national leaders strategizing to build and bank trade power — by limiting imports, for example — when relations with a trading partner turned frosty due to political turnover.

“While many of the examples we give in the paper are from China, we hope to show this is a more general phenomenon,” Yang said. “Trade is a source of power any country can access.”

The paper is threaded with other insights.

“If the European Union acted as one country, it would actually be able to exercise positive power over China,” Yang said. “But individual EU members all have negative power over China. I don’t think it’s a coincidence that China typically engages with EU members bilaterally.”

What’s more, the U.S. and China are weaker against each other. The paper features a pair of maps illustrating their trade power over the rest of the world from 2001 to 2021. U.S. strength appears to peak in North America, while China’s is anchored in the Asia-Pacific region.

“In terms of global power dynamics,” Yang observed, “medium-sized countries are very much the ones that get bullied.”

The results underscore a recent shift in the global trade order. For half a century following World War II, Yang said, the largest economies imported and exported with hopes of maximizing efficiency for the benefit of domestic businesses and consumers.

“What’s worrisome is that we’re starting to see the opposite,” he offered. “Trade is being restructured to take power into consideration. But in contrast with the positive-sum nature of efficiency-enhancing trade as countries produce according to their comparative advantage, power consideration in trade is negative-sum, hurting welfare on both sides.

“As we begin to painfully realize,” Yang added, “it may not be geopolitically feasible to implement efficient trade.”

Analysts highlight a school-sized gap in mental health screening

Health

Missing teens where they are

Analysts highlight a school-sized gap in mental health screening

Alvin Powell

Harvard Staff Writer

4 min read

Hao Yu.

Stephanie Mitchell/Harvard Staff Photographer

As anxiety and depression persist at alarming rates among U.S. teens, less than a third of the nation’s public schools conduct mental health screenings, and a significant number of those that do say it’s hard to meet students’ needs, according to a new survey of principals.

With staffing that includes counselors and nurses, public schools are uniquely positioned to help address the youth mental health crisis declared in 2021 by the U.S. surgeon general, according to Harvard Medical School’s Hao Yu, a co-author of the study.

“Child mental health is a severe public health issue in this country,” he said. “Even before COVID, about a quarter of children had different degrees of mental health problems, and during the pandemic the problem just got worse.”

The study, published last month in JAMA Network Open, is the first since 2016 to poll public school principals on children’s mental health, said Yu, an associate professor of population medicine. The intervening years have included COVID-related disruptions, growing worries about screen time, and a surge of artificial intelligence in everyday life, he noted.

$1B cut from previously approved federal funding for school mental health support

One positive finding from the survey, which was funded with a grant from the National Institute of Mental Health, is that the percentage of U.S. public schools that screen for mental health issues has risen significantly in the past nine years, albeit from just 13 percent to 30.5 percent. The survey asked 1,019 principals three questions: Do you screen for student mental health issues? What steps are taken for students identified with anxiety or depression, two of the most common youth mental health issues? And how easy or hard is it to find adequate mental health care for students who need it?

The responses show that the most common step taken for students struggling with anxiety or depression is to notify parents — almost 80 percent of schools did that. Seventy-two percent offer in-person treatment, while about half refer to an outside mental health provider. Less than 20 percent offer telehealth treatment.

Responses to the final question highlight the challenge facing those seeking to address the problem, with 41 percent describing the task of getting care as “hard” or “very hard,” a result that Yu said, while concerning, isn’t surprising given the nationwide shortage of mental health providers.

The survey, conducted with colleagues from the Medical School, the nonpartisan research organization RAND, Brigham and Women’s Hospital, the University of Pittsburgh, the Harvard Pilgrim Health Care Institute, and Brown University, also showed that school-based screening programs are concentrated in larger schools, with 450 students or more, and in districts with larger populations of racial and ethnic minority students.

Helping young people overcome mental health challenges is a multistep process, Yu said.

“We need to make child psychiatry an attractive profession and we need to train more mid-level providers — social workers, school nurses, and counselors — because those middle-level providers play an important gatekeeper role, helping identify children with mental health problems and helping children and their families get into the healthcare system,” he said.

It’s also important, Yu said, to get policy right at all levels of government. For example, he said, even though it’s clear that meeting the challenge will require more resources, the federal government recently slashed $1 billion in previously approved school mental health funding. A potentially positive development, he said, is the nationwide trend toward restrictions on smartphone use.

“I don’t think any other institution can replace the schools in identifying and treating child mental health problems,” Yu said. “If mental health problems are treated, their severity can be greatly reduced. Mental health problems not treated in childhood can have a long-lasting effect into adulthood. That’s not an optimal situation for our society.”

ETH Zurich launches pioneering construction research project

The HIL building on the Hönggerberg campus is set to become a living lab. Now in need of renovation, the building will be remodelled and extended, with completion pencilled in for 2035. Professorships at ETH Zurich will engage with the project directly to research techniques and designs. Their aim is to advance sustainable redevelopment and retrofitting methods.

NUS Team Bumblebee crowned champion of RoboSub 2025

Following a one-year hiatus, NUS Team Bumblebee made a strong return to RoboSub 2025, reclaiming the championship title. The team had previously clinched its first championship in 2022 and successfully retained the top place in 2023.

Held from 11 to 17 August 2025 at the William Woollett Jr Aquatics Center in the City of Irvine in California, USA, RoboSub is a global competition that challenges teams to tackle real-world underwater robotics problems – from oceanographic exploration and mapping to object detection and manipulation. This year’s event brought together 58 teams from across the globe.

Team Bumblebee took a break from last year’s edition of RoboSub to focus on the Maritime RobotX Challenge, another international competition that advances autonomous robotic systems in the maritime domain. This year, the NUS team returned to RoboSub 2025 amid a record number of contenders, including Duke University, San Diego State University and Arizona State University – all of which were also finalists in this year’s competition.

After a week of intense competition – featuring technical presentations to judges and multiple rounds of pool testing – Team Bumblebee emerged as the overall champion and also swept awards for design documentation, namely Top Website, Top Video, Top Report and Top Assessment.

Team Lead Leong Deng Jun, Year 4 Computer Engineering undergraduate, reflected on the team’s effort, “I am very proud of what the team has achieved at this RoboSub. This year’s competition was especially challenging with a record number of teams participating. Many teams came better prepared, which meant we had limited testing time in the competition arena. Despite some unexpected setbacks and hardware issues, every member of our team stepped up and contributed to this victory.”

Based at the College of Design and Engineering in NUS, Team Bumblebee is a student-run multidisciplinary project comprising 55 undergraduates from Mechanical, Computer, and Electrical Engineering as well as NUS School of Computing, NUS Business School, and NUS College of Humanities and Sciences.

NUS Business School celebrates 60 years of shaping tomorrow’s leaders

The NUS Business School marked its 60th anniversary with a birthday bash on 14 August 2025, bringing together current students, staff and past and present leaders to celebrate its journey and look to the future.

Founded in 1965 as the Department of Business Administration, it began with just 21 students recruited from the Faculty of Arts by its first leader, Dr Andrew Zecha, who personally persuaded undergraduates to join the fledgling department. It went on to lay the foundation for business education in Singapore, setting up Master of Business Administration (MBA) and Executive MBA (EMBA) programmes in partnership with schools in China and the US and earning accreditations that helped it transform into a global business school.

Today, the Business School boasts more than 6,000 students across the undergraduate and postgraduate levels and an alumni network of more than 50,000. It is the top business school in Asia and ranks among the top ten globally.

NUS President Tan Eng Chye, NUS Business School co-founder Mr Tan Yam Pin, former deans Emeritus Professor Lee Soo Ann, Professor Wee Chow Hou, Professor Hum Sin Hoon and Professor Kulwant Singh, and Deputy Dean Associate Professor Jumana Zahalka joined Distinguished Professor Andrew Rose, Dean of the Business School, for a cake-cutting ceremony to open the event.

In his opening remarks, Prof Rose thanked the School’s founders and leaders for their contributions over the past six decades, as well as the faculty and staff whose education and research work form the core of the School’s reputation. “They are the reason why we’re attracting higher and higher quality students, and our reputation continues to grow,” he said.

The event included a fully subscribed masterclass on the future of sustainability in Asia, delivered by Professor Lawrence Loh, Director of the Centre for Governance and Sustainability, and an activity-packed carnival. Attendees enjoyed traditional snacks, arcade games, art booths and a dunk tank where their attempts to dunk Prof Rose and other faculty members raised money for a good cause.

Leading in Singapore and Asia

For former Business School leaders Mr Tan, Prof Lee and Prof Wee, the 60th anniversary milestone was an opportunity to reflect on the stark changes that have taken place since their time. The School’s enrolment growth is especially impressive for Mr Tan, who was there when the initial cohort of 21 was enrolled. In comparison, the undergraduate intake in recent years has been around 4,000 students per year.

Mr Tan reflected: “In 1963, when we first conceived the concept of starting a department of business administration, it seemed like a baby step forward to me. Now looking back, it was actually a giant leap, like (Neil) Armstrong said.”

Back then, business subjects like accountancy were considered more suitable for polytechnic diplomas, rather than degrees, Prof Lee recalled. However, things changed when Singapore gained independence in 1965 and needed to train its own finance professionals.

Prof Lee said, “The teaching of accounting and business administration became essential for businesses to survive, because the accountants had returned to England. By teaching accounting here, we upgraded the capabilities of Singapore and the economy.”

He looks forward to continued innovations in how the School prepares students for the changing business environment, such as by offering more double degree programmes and leveraging new technologies like Generative AI to help students learn faster and more broadly.

Prof Wee was pleased to see that several initiatives he introduced during his tenure from 1990–1999 have become an integral part of the School’s offerings and shaped its global focus.

For instance, he was an early advocate of the modular system over the traditional year-long curriculum, as it would allow students to retake individual modules as needed and embark on exchange programmes more flexibly. The same system has now been implemented across the entire University.

To strengthen the School’s influence as an authority on Asian business, he encouraged faculty members to initiate collaborations with authors of well-known business textbooks to develop new editions with Asian case studies and contexts. In addition, Chinese MBA and EMBA programmes were launched under his leadership, offering the first such programmes outside mainland China.

A project that he would have liked to execute before he stepped down was the creation of a bilingual master’s programme, which he believes would be critical to the Business School’s mission of providing business education with an Asian lens. He still hopes that this vision will eventually come to fruition in the School’s next chapter.

“Singapore as a nation has survived because the West wants to listen to our views about China, and the Chinese come to us to know about the West,” Prof Wee said. “As a society, our strength lies in playing that bridging role. It’s important that the government is trying to cultivate bilingualism, and our universities should complement this.”

‘We’re so happy to have you here’

First-year students and their families in Harvard Yard during move-in day.

First-year students and their families criss-cross the Yard on move-in day.

Stephanie Mitchell/Harvard Staff Photographer

Campus & Community

‘We’re so happy to have you here’

Eileen O'Grady

Harvard Staff Writer

5 min read

Yard brims with voices and motion, excitement and nerves, sweat and tears on move-in day

Ryan Zhou was busy moving items into his Weld Hall dorm room on Tuesday with the help of his parents and his new suitemates, Kelvin Cheung and Ronan Pell, when there was a knock on the door.

“Hi, Ryan, my name’s Hopi,” said Hopi Hoekstra, the Edgerley Family Dean of the FAS, coming into the room with some bags she had helped Zhou’s brother carry up from the car downstairs. “Welcome, we’re so happy to have you here.”

“I’m excited,” said Zhou, as he stood in the suite’s common area piled high with duffels, boxes, and bedding. “I’m excited to get started with meeting new people, making new friends, excited for all the professors and the classes.”

Harvard Yard came alive Tuesday morning as first-year students and their families unloaded cars and carried bags and boxes to the dorms in preparation for the start of their time at Harvard.

Ronan Pell and Kelvin Cheung in their dorm room.

Dean Hopi Hoekstra chats with first-years Ronan Pell (left) and Kelvin Cheung as they settle in their new home in Weld Hall.

Veasey Conway/Harvard Staff Photographer

Zhou and his family drove up from their home in Ellicott City, Maryland, a few days beforehand. His father, Ning Zhou, said he’s feeling positive about the road ahead.

“I am just extremely proud of him and his years of effort,” he said. “This is his dream school. A lot of Harvard graduates told him the experience was transformative for them, so I hope that he will have a similar experience.”

“I just feel happy for him,” Zhou’s mother, Jun Gui, added. “He found the place he wants to go. I haven’t shed a tear yet.”

Anne Yahanda, Alan Garber, Hopi Hoekstra, and David Deming.

Welcoming the new students were President Alan Garber (second from left), joined by his wife, Anne Yahanda (far left); Faculty of Arts and Sciences Dean Hopi Hoekstra; and Dean of Harvard College David Deming.

Stephanie Mitchell/Harvard Staff Photographer

Cate Frerichs and mother Desiree Luccio in Harvard Yard.

First-year Cate Frerichs with her mother, Desiree Luccio.

Stephanie Mitchell/Harvard Staff Photographer

First-year Jose Garcia carries a box up the stairs at Hollis Hall.

First-year Jose Garcia helps hoist a box up a Hollis Hall stairway.

Veasey Conway/Harvard Staff Photographer

Harvard senior Lexi Triantis blows bubbles outside Hollis Hall.

Senior Lexi Triantis takes a bubble break.

Veasey Conway/Harvard Staff Photographer

Staging area for moving boxes outside the Science Center at Harvard.

Boxes collect in a staging area outside the Science Center.

Veasey Conway/Harvard Staff Photographer

Alex Heuss ’26 wears a T-shirt with all the first-year Houses.

A T-shirt decorated with emblems of Harvard’s first-year Houses.

Stephanie Mitchell/Harvard Staff Photographer

By Johnston Gate, a group of upper-level students from the Crimson Key Society, holding a “Welcome to Harvard” sign, sang and danced along to Nicki Minaj and Bruno Mars songs, waving to the cars that pulled in. Outside each dorm, upper-level Peer Advising Fellows, dressed in red T-shirts, greeted new students and helped show them to their rooms.

“What makes move-in day so special?” Hoekstra said. “Three things: Experiencing the energy that our returning students bring to welcoming new first-years to the Harvard community. Meeting proud, and sometimes nervous, parents who have traveled from around the globe. Watching new friendships form among roommates meeting for the first time — ones that often not only last for four years at Harvard but across lifetimes.”

“A lot of Harvard graduates told him the experience was transformative for them, so I hope that he will have a similar experience.”

Ning Zhou, about son Ryan

Leila Holland and her parents, Keisha and Jaime Holland, from Long Beach, California, took it all in as they paused outside the key distribution tent in the center of the green. Leila, who had just picked up her ID and register book, said she was looking forward to seeing her Hollis Hall room.

“I’m a little nervous, but I’m really excited to be part of a new community,” she said.

Jaime Holland said he knows this will be a time of changes.

“Just the discovery process, as she figures out what she wants to do and the kind of person she wants to be,” he said. “This is a great place to do it.”

First-year students and families hoist boxes in Harvard Yard.
Veasey Conway/Harvard Staff Photographer

David Deming, Danoff Dean of Harvard College, made his way between the parked cars, cheerfully accepting a black rolling suitcase and a pink wall sign from a family’s car, and leading the way to Weld.

“Move-in day is one of my very favorite days of the year at Harvard,” Deming said. “There is so much positive energy and excitement and anticipation. I feel that, too, in my first year as dean. It’s great to be able to help new students move in and feel the positive energy with them.”

Outside Grays Hall, Harvard President Alan Garber and his wife, Anne Yahanda, chatted with parents, swapping stories and recalling what it felt like to drop their own children at college.

“For everyone here, all the hard work, everything they’ve done — it’s just such an accomplishment and dream.”

Desiree Luccio

For most parents, move-in day prompts complicated emotions.

Desiree Luccio couldn’t help tearing up as she spoke about moving her daughter, Cate Frerichs, into Wigglesworth Hall. The two wore matching red Harvard sweatshirts.

“I didn’t cry at graduation, but now it’s hitting me,” Luccio said. “For everyone here, all the hard work, everything they’ve done — it’s just such an accomplishment and dream.”

For her part, Frerichs was particularly looking forward to being a student athlete — she will be a coxswain on the men’s heavyweight rowing team.

“I guess I’m nervous and excited,” Frerichs said. “I’ve met my roommates, and I’m excited to start living with them and to meet everyone.”

Global concerns rising about erosion of academic freedom

Gagged face on pile of text books.

Jonathan McHugh/Ikon Images

Nation & World

Global concerns rising about erosion of academic freedom

New paper suggests threats are more widespread, less obvious than some might think

Christina Pazzanese

Harvard Staff Writer

8 min read

Political and social changes in the U.S. and other Western democracies in the 21st century have triggered growing concerns about possible erosion of academic freedom.

In the past, colleges and universities largely decided whom to admit and hire, what to teach, and which research to support. Increasingly, those prerogatives are being challenged.

In a new working paper, Pippa Norris, the Paul F. McGuire Lecturer in Comparative Politics at Harvard Kennedy School, looked at academic freedom and found it faces two very different but dangerous threats. In this edited conversation, Norris discusses the lasting effects these threats can have on institutions and scholars.


How is academic freedom defined here and how is it being weakened?

Traditional claims of academic freedom suggest that as a profession requiring specialist skills and training like lawyers or physicians, universities and colleges should be run collectively as self-governing bodies.

Thus, on the basis of their knowledge and expertise in their discipline and subfield, scholars should decide which colleagues to hire and promote, what should be taught in the classroom curriculum, which students should be selected and how they should be assessed, and what research should be funded and published. 

Constraints on this process from outside authorities, no matter how well-meaning, can be regarded as problematic for the pursuit of knowledge.

Encroachments on academic freedom can arise for many different reasons. For example, the criteria used for state funding of public institutions of higher education commonly prioritize certain types of research programs over others. Personnel policies, determined by laws, set limits on hiring and firing practices in any organization. Donors also prioritize support for certain initiatives. Academic disciplines favor particular methodological techniques and analytical approaches. And so on.

Therefore, even in the most liberal societies, academic institutions and individual scholars are never totally autonomous, especially if colleges are publicly funded.

But nevertheless, the classical argument is that a large part of university and college decision-making processes, and how they work, should ideally be internally determined, by processes of scholarly peer review, not externally controlled, by educational authorities in government.

You say academic freedom faces threats on two fronts, external and internal. Can you explain?

Much of the human rights community has been concerned primarily about external threats to academic freedom. Hence, international agencies like UNESCO, Amnesty International, and Scholars at Risk, and domestic organizations like the American Association of University Professors, are always critical of government constraints on higher education like limits to free speech and the persecution of academic dissidents, particularly in the most repressive authoritarian societies.

In America, much recent concern has focused on states such as Florida and Texas, and the way in which lawmakers have intervened in appointments to the board of governors or changed the curriculum through legislation.

But, in fact, the government has always played a role, even in private universities. Think about sex discrimination, think about Title IX, think about all the ways in which we’ve legislated to try to improve, for example, diversity. That wasn’t accidental. That was a liberal attempt to try to make universities more inclusive and have a wider range of people coming in through social mobility.

So, we can’t think this all just happened because of Trump. It hasn’t. It’s a much larger process, and it’s not simply America. In all democracies, official bodies in the federal or state government, whichever party is in power, generally regulate employment conditions, university accreditation, curriculum standards, student grants and loans, and so on and so forth, and so it’s going to do that for colleges and universities in the U.S., as well.

Academic freedom is also at risk from internal processes within higher education, especially informal norms and values embedded in academic culture. Those can exist in any organization.

In academic life, surveys of academics since the 1950s have commonly documented a general liberal bias (broadly defined) amongst the majority of scholars, with conservatives usually forming a heterodox minority.

This bias comes from a variety of different sources: It’s partly self-selection, a matter of who chooses to go into academic life versus into private-sector careers. But it is also internally reinforced — a matter of who gets selected, appointed, promoted, and who gets research grants and publications. There are lots of different ways people have to conform to the social norms of the workplace and within their discipline.

Those cultural norms are tacit. The problem is that if you don’t follow the norms, there may be a financial penalty — you don’t get promoted, or you don’t get that extra step in your grant and your award.

But they may also be just informal pressures of collegiality, friendship, and social networks. People don’t want to offend so they seek to fit in with their colleagues, department, or institution. As a result, heterodox minorities may well decide to “self-censor,” to decline from speaking up in dissent with the prevailing community.

The result is to accentuate the liberal bias, since criticisms of prevailing orthodoxies are not even expressed or heard in debate. Thus, many holding orthodox views shared by the majority in departmental meetings, appointment boards, or classroom seminars may believe that discussion is open to all viewpoints, but silence should not be taken as tacit agreement if minority dissidents silently feel unable to speak up.

The mere perception that academic freedom is in decline increases people’s tendency to self-censor, according to the paper. Why is that?

Liberals often feel that there is no self-censorship, and there is no problem in academe, that everybody is free to speak their opinion, and that they welcome diversity in the classroom, they welcome diversity in the department, and things like that.

The problem is that if you’re in a minority and in particular right now the conservative minority, then you feel you can’t immediately speak up on a number of issues, which might offend your colleagues or might have material problems for your career.

If you’re a student and you have a heterodox view, you might feel that you won’t be popular, you won’t be invited to the parties, and you won’t have all those social networks which are a really important part of why people go to college. So, there’s this informal penalty.

Liberals don’t sense it because when they are discussing things, they think there are a variety of different views, but they may well be antithetical. They don’t even hear the criticisms of their views because those who are in the minority don’t want to speak up.

The minority can be defined in lots of different ways. It’s not simply one ideology. There are multiple viewpoints in any subject discipline. But there’s a particular way of looking at these things within a discipline, which sets the agenda, which also affects textbooks and affects the classroom, and, in fact, affects the informal culture.

You found that endorsements of strong pro-academic freedom values predict the willingness of scholars to speak out even when it differs from popular opinion. What did you mean?

Think about the people who are standing up for Harvard right now or standing up for any institution or any other unpopular view. A strong liberal is somebody who follows the John Stuart Mill argument, which is that the only way you know your argument is to know the opponent’s and to be able to act like a prosecutor in which you can put the argument on both sides. I try to use this as a pedagogy in my own classes.

People who believe in academic freedom are largely in the more liberal democracies, the Western democracies of the world. In many countries, they don’t have those luxuries.

In China, you’re not going to be speaking up against the Communist Party. It’s about what can you say and when can you say it — being sensitive to the silences and what generates the silence. And how do you ask a question, which is not going to belittle somebody and is not going to make them feel small, but you’re taking them seriously when you don’t agree with them.

The most important finding from my research evidence is that if you’re working and living in a country with more institutional constraints and less legal freedom, you’re also more likely to suppress your own views.

You can think of it as an embedded model like a Russian nesting doll. The internal group is limiting your willingness to speak up; the external is about the punishments you face if you do speak up. The two interact, obviously, but the informal norms are the subtlest things, which will keep you quiet.

J-PAL North America launches Initiative for Effective US Crime Policy

Crime and public safety are among the most pressing concerns across communities in the United States. Violence fractures lives and carries staggering costs; the economic burden of gun violence alone tops $100 billion each year. More than 5 million people live under supervision through incarceration, probation, or parole, while countless more experience the collateral consequences of arrests and criminal charges. Achieving lasting public safety requires confronting both crime itself and the collateral consequences of the U.S. criminal justice system.

To help meet these dual challenges, J-PAL North America — a regional office of MIT’s Abdul Latif Jameel Poverty Action Lab (J-PAL) — with generous grant support from Arnold Ventures, launched the Initiative for Effective US Crime Policy (IECP). This initiative will generate rigorous evidence on strategies to make communities safer, reduce discrimination, and improve outcomes at every stage of the criminal justice process.

“There are a lot of open questions. We desperately need to be trying new solutions, but we need to try them in a way that enables us to learn whether they work,” notes Jennifer Doleac, executive vice president of criminal justice at Arnold Ventures. “There is a path forward for us to step up and make a concerted effort to make sure that we are being very strategic in how we spend our time and where we are directing our resources.”

Building on more than a decade of pioneering randomized evaluations, J‑PAL North America’s IECP will fund rigorous new studies in the criminal justice space, offer hands-on technical assistance, and connect researchers with practitioners. By reviewing both established and emerging evidence, the initiative will also help decision-makers focus resources on interventions that demonstrably improve public safety.

“Through this initiative, we aim to expand the use of rigorous existing evidence and help scale interventions that are proven to improve outcomes, from prevention to reintegration,” says Sara Heller, associate professor of economics at the University of Michigan and co-chair of IECP. “At the same time, IECP seeks to fill critical gaps in the evidence base by supporting new research on what works to improve the criminal justice system in the United States.”

A platform for collaboration

In June at the MIT Museum, IECP convened over 70 researchers, policymakers, and practitioners to identify research priorities and catalyze collaboration. Speakers explored the structural drivers of violence, effective pathways for translating evidence into policy, and strategies for establishing successful partnerships between researchers and practitioners.

Speakers also reflected on the value and limits of existing evidence and discussed areas in which randomized evaluations can help address the most pressing questions. Randomized evaluations have contributed powerful insights in areas such as summer youth employment programs, reminders to increase court appearances, hot-spot policing, and the use of body-worn cameras. Yet many important questions remain unanswered.

“We know randomized evaluations can answer hard policy questions, but only if we ask the right questions, with the right lens, at the right scale,” says Amanda Agan, associate professor at Cornell University and co-chair of IECP. “This convening was a call to push further: to design studies that are not only rigorous, but also relevant to the lived experiences of communities and the structural forces that shape public safety.”

How to take part

Are you a practitioner with a promising idea in the criminal justice space, a policymaker planning a new program, a researcher developing a real-world intervention, or a funder investing in rigorous empirical evidence? IECP supports research partnerships to advance scalable, evidence-based solutions in the criminal legal system by funding impact evaluations, connecting researchers and practitioners, and supporting the design of randomized evaluations and the dissemination of evidence.   

To learn more about this initiative, please contact iecp@povertyactionlab.org.

© Photo courtesy of J-PAL North America.

The new Initiative for Effective US Crime Policy recently convened over 70 researchers, policymakers, and practitioners to identify research priorities and catalyze collaboration.


Funny or failure? It’s a fine line.

Will Burke dressed as "Crab Jesus" on Jimmy Kimmel Live!

Will Burke on “Jimmy Kimmel Live!”

Photos by Randy Holmes/ABC

Arts & Culture

Funny or failure? It’s a fine line.

‘Jimmy Kimmel Live!’ writer Will Burke on taking risks in comedy and why getting laughs is worth near-constant rejection

Anna Lamb

Harvard Staff Writer

7 min read

Tightrope series

A series exploring how risk shapes our decisions.

Imagine walking a tightrope. Your goal is to get to the other side without falling. Below you — certain death. Well, maybe not death. Maybe there’s a net to catch you, but it’s not a very soft net, and falling into it will certainly not feel good. That, says Will Burke, alumnus of Harvard College and nearly two-decade veteran staff writer, now director, for “Jimmy Kimmel Live!,” is what trying to be funny is like.

“The second you walk out on stage or you start to tell a joke, you’re walking a tightrope,” Burke said. “You’re betting on your timing, your point of view, and sometimes you’re putting your dignity on the line in the hopes that people will laugh.”

Making people laugh, both on stage and off, has been a lifelong pursuit for Burke ’99. His comedy career started as a class clown in the hallways of the New England prep schools where his father was a teacher, and continued on stage at Harvard with the improv group On Thin Ice and the Shakespeare troupe he helped found. Then it blossomed in Los Angeles, practicing with improv groups like The Groundlings and auditioning for acting gigs.

And while a career spent trying to be funny sounds like a dream for many, Burke said it’s actually been quite risky. There’s the risk of putting yourself out there creatively, the risk of crossing a line with a joke, and then, of course, the risk of not “making it” as a funny guy full-time.

Will Burke holds a "laugh" sign as Zach Galifianakis is interviewed by Jimmy Kimmel.
Burke (from left) on stage with Zach Galifianakis and Kimmel.

“The biggest risk was taking my Harvard diploma in one hand and trading the ivory towers of Harvard for the dive bars of Hollywood,” Burke said. “I was turning my back on the pedigree and the connections.”

Burke knows a Harvard degree can get you far. But, he said, when he moved to Los Angeles after graduation in 1999, he also knew it wouldn’t get him on TV. He’d have to do the same open mics, auditions, and acting classes the rest of the aspiring comedians in LA were doing. And in the meantime, he’d be a bartender slash tutor slash cater-waiter slash comedian.

“I suppose in some ways, you could say for a Harvard grad it’s less risky to go try to do this thing, because if it doesn’t work out you’ve still got a Harvard diploma, and some doors will open to you in a different field. But once you’re 10 years in, 15 years in, starting over in a totally different career is risky too,” he said.

And 10 years, Burke said, would be all he gave it before accepting defeat and going back to the East Coast.

“As an actor, it took me, like, 150 auditions before I booked my first thing,” Burke said. “And at this point I had become a little jaded. I was like, ‘This is so annoying. I don’t even want this commercial. This is a terrible Taco Bell ad, who cares?’ And when you don’t care, then they’re like, ‘Oh, that guy’s great. He doesn’t care. He doesn’t need this job.’ They feel it. And so that taught me a lot.”

“You’re betting on your timing, your point of view, and sometimes you’re putting your dignity on the line in the hopes that people will laugh.”

Besides booking some commercials, and some small roles on TV, after six years of auditioning and being rejected, Burke was offered a job back in Boston, working for a bank. He had a baby on the way, rising rent, and an income being stitched together through various odd jobs.

“I essentially, verbally accepted a job — I went down to HR and they photocopied my driver’s license and gave me the 401K package, what it would look like, and that whole thing. And I was like, ‘This feels like the most responsible thing to do. I have mouths to feed.’ And I could still scratch the itch in comedy clubs in Boston on the weekends, if I wanted. I kept trying to give myself a pep talk that I felt good about this — having a steady paycheck and a guaranteed career.”

Fate, said Burke, had other plans.

“Shortly thereafter, I flew back to LA and I got offered a job writing for ‘Jimmy Kimmel Live!’ And thank God I did. That was 19 years ago, and I’ve been there ever since.”

Since landing “Kimmel,” Burke said every day on the job, trying to be funny, is a risk.

“There were stressful days where I was convinced I was getting fired,” he said. “You’d see other writers get fired. I was like, ‘Oh, he’s not pitching stuff. Jimmy doesn’t like his stuff or her stuff,’ and then the next thing you know, that guy’s desk is empty. That’s real-world risk. There’s a lot of pressure to continue to produce stuff that lands and you’re trying to hit this moving target — the stuff that was making Jimmy laugh last week, he’s over it. Now that’s played out. Humor is like that.”

“It’s a dream job. It’s what I envisioned doing when I was a little kid, and I’d see ‘Saturday Night Live,’ or even ‘The Muppet Show.’ The idea of, there’s a show going on, and there’s insanity backstage, and there’s a Stormtrooper and free chickens and Gonzo and things are crashing and the show must go on.”

Asked about how he deals with near-constant rejection in the office, Burke said your feelings are always on the line.

“It’s impossible to not take things personally,” he said. But he added, there’s a trick to avoid getting too hurt.

“You walk into the room convinced that you are the absolute only person who could ever play this role, and you do your audition, and as soon as they say, ‘Thank you so much,’ you walk out of that room convinced you will never hear from them again and that you didn’t get it, so that you’re not disappointed. And it’s this weird game you play with yourself. Extrapolating that to the writers’ room as you’re pitching a joke, you stop caring what people think, because your nerve endings get frayed.”

In his personal life, Burke says his approach to humor errs on the risky side.

“Comedy can disarm tension. It can bridge divides. It can humanize a room, especially when you’re an underdog or an outsider,” he said. “Sometimes telling a dirty joke at a fancy dinner party is like, ‘Oh, we’re going there. Everyone loves a dirty joke, and now we’re all sharing dirty jokes, and it’s OK. This is an R-rated dinner.’”

But of course, there’s always the risk of the joke going too far. In a fictionalized scenario that definitely wasn’t him, he lays out the rule of time and place.

“Sometimes, in doing a joke, it goes too far, and you learn from it, but you have to go too far sometimes to know where the line is,” he said. “I know you thought it was super funny to come downstairs wearing a bra on your head at the party, but we’re at my friend’s house, and that’s his girlfriend’s bra, and you don’t know them.”

But overall, Burke said, the rewards of being funny well outweigh the risks of being embarrassed, or falling off the tightrope.

“It’s a dream job,” he said. “It’s what I envisioned doing when I was a little kid, and I’d see ‘Saturday Night Live,’ or even ‘The Muppet Show.’ The idea of, there’s a show going on, and there’s insanity backstage, and there’s a Stormtrooper and free chickens and Gonzo and things are crashing and the show must go on.”

Simpler models can outperform deep learning at climate prediction

Environmental scientists are increasingly using enormous artificial intelligence models to make predictions about changes in weather and climate, but a new study by MIT researchers shows that bigger models are not always better.

The team demonstrates that, in certain climate scenarios, much simpler, physics-based models can generate more accurate predictions than state-of-the-art deep-learning models.

Their analysis also reveals that a benchmarking technique commonly used to evaluate machine-learning techniques for climate predictions can be distorted by natural variations in the data, like fluctuations in weather patterns. This could lead someone to believe a deep-learning model makes more accurate predictions when that is not the case.

The researchers developed a more robust way of evaluating these techniques, which shows that, while simple models are more accurate when estimating regional surface temperatures, deep-learning approaches can be the best choice for estimating local rainfall.

They used these results to enhance a simulation tool known as a climate emulator, which can rapidly simulate the effect of human activities on a future climate.

The researchers see their work as a “cautionary tale” about the risk of deploying large AI models for climate science. While deep-learning models have shown incredible success in domains such as natural language, climate science contains a proven set of physical laws and approximations, and the challenge becomes how to incorporate those into AI models.

“We are trying to develop models that are going to be useful and relevant for the kinds of things that decision-makers need going forward when making climate policy choices. While it might be attractive to use the latest, big-picture machine-learning model on a climate problem, what this study shows is that stepping back and really thinking about the problem fundamentals is important and useful,” says study senior author Noelle Selin, a professor in the MIT Institute for Data, Systems, and Society (IDSS) and the Department of Earth, Atmospheric and Planetary Sciences (EAPS), and director of the Center for Sustainability Science and Strategy.

Selin’s co-authors are lead author Björn Lütjens, a former EAPS postdoc who is now a research scientist at IBM Research; senior author Raffaele Ferrari, the Cecil and Ida Green Professor of Oceanography in EAPS and co-director of the Lorenz Center; and Duncan Watson-Parris, assistant professor at the University of California at San Diego. Selin and Ferrari are also co-principal investigators of the Bringing Computation to the Climate Challenge project, out of which this research emerged. The paper appears today in the Journal of Advances in Modeling Earth Systems.

Comparing emulators

Because the Earth’s climate is so complex, running a state-of-the-art climate model to predict how pollution levels will impact environmental factors like temperature can take weeks on the world’s most powerful supercomputers.

Scientists often create climate emulators, simpler approximations of a state-of-the-art climate model, which are faster and more accessible. A policymaker could use a climate emulator to see how alternative assumptions on greenhouse gas emissions would affect future temperatures, helping them develop regulations.

But an emulator isn’t very useful if it makes inaccurate predictions about the local impacts of climate change. While deep learning has become increasingly popular for emulation, few studies have explored whether these models perform better than tried-and-true approaches.

The MIT researchers performed such a study. They compared a traditional technique called linear pattern scaling (LPS) with a deep-learning model using a common benchmark dataset for evaluating climate emulators.
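
The article names linear pattern scaling as the traditional baseline. In its standard form, pattern scaling fits a per-grid-cell linear relationship between a local climate variable and global-mean temperature, then reuses those coefficients to emulate new scenarios. The minimal sketch below shows that standard idea on synthetic data; the study’s actual datasets, variables, and benchmark setup are not specified here.

```python
import numpy as np

# Minimal sketch of linear pattern scaling (LPS) on synthetic data: regress each
# grid cell's local response on global-mean temperature, then reuse the per-cell
# coefficients to emulate the local pattern for a new warming trajectory.
rng = np.random.default_rng(0)
n_years, n_cells = 100, 500
global_T = np.linspace(0.0, 3.0, n_years) + 0.1 * rng.standard_normal(n_years)
local_T = (np.outer(global_T, rng.uniform(0.5, 2.0, n_cells))        # forced response
           + 0.3 * rng.standard_normal((n_years, n_cells)))          # internal noise

# Fit one slope and intercept per grid cell with ordinary least squares.
X = np.column_stack([global_T, np.ones_like(global_T)])              # (n_years, 2)
coef, *_ = np.linalg.lstsq(X, local_T, rcond=None)                   # (2, n_cells)

# Emulate local patterns for a hypothetical new scenario's global-mean warming.
new_global_T = np.array([1.5, 2.0, 2.5])
emulated = np.column_stack([new_global_T, np.ones(3)]) @ coef        # (3, n_cells)
print(emulated.shape)
```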

Their results showed that LPS outperformed deep-learning models on predicting nearly all parameters they tested, including temperature and precipitation.

“Large AI methods are very appealing to scientists, but they rarely solve a completely new problem, so implementing an existing solution first is necessary to find out whether the complex machine-learning approach actually improves upon it,” says Lütjens.

Some initial results seemed to fly in the face of the researchers’ domain knowledge. The powerful deep-learning model should have been more accurate when making predictions about precipitation, since those data don’t follow a linear pattern.

They found that the high amount of natural variability in climate model runs can cause the deep learning model to perform poorly on unpredictable long-term oscillations, like El Niño/La Niña. This skews the benchmarking scores in favor of LPS, which averages out those oscillations.

Constructing a new evaluation

From there, the researchers constructed a new evaluation with more data that address natural climate variability. With this new evaluation, the deep-learning model performed slightly better than LPS for local precipitation, but LPS was still more accurate for temperature predictions.
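
The article does not spell out how the new evaluation was constructed, but one common way to keep internal variability from skewing scores is to compare emulator output against an average over several climate-model ensemble members rather than a single run. The synthetic sketch below shows how the same emulator can look mediocre against one noisy realization yet accurate against the ensemble mean, which is the kind of distortion the researchers describe.

```python
import numpy as np

# Synthetic illustration of the benchmarking pitfall: scoring against a single
# climate-model run mixes the forced signal with internal variability (e.g., El Niño),
# while averaging ensemble members first isolates the forced signal.
rng = np.random.default_rng(1)
n_members, n_years = 5, 50
forced = np.linspace(0.0, 2.0, n_years)                          # "true" forced warming
ensemble = forced + 0.4 * rng.standard_normal((n_members, n_years))

emulator = forced + 0.05                                          # a nearly perfect emulator

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

print("RMSE vs one noisy run:", rmse(emulator, ensemble[0]))
print("RMSE vs ensemble mean:", rmse(emulator, ensemble.mean(axis=0)))
```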

“It is important to use the modeling tool that is right for the problem, but in order to do that you also have to set up the problem the right way in the first place,” Selin says.

Based on these results, the researchers incorporated LPS into a climate emulation platform to predict local temperature changes in different emission scenarios.

“We are not advocating that LPS should always be the goal. It still has limitations. For instance, LPS doesn’t predict variability or extreme weather events,” Ferrari adds.

Rather, they hope their results emphasize the need to develop better benchmarking techniques, which could provide a fuller picture of which climate emulation technique is best suited for a particular situation.

“With an improved climate emulation benchmark, we could use more complex machine-learning methods to explore problems that are currently very hard to address, like the impacts of aerosols or estimations of extreme precipitation,” Lütjens says.

Ultimately, more accurate benchmarking techniques will help ensure policymakers are making decisions based on the best available information.

The researchers hope others build on their analysis, perhaps by studying additional improvements to climate emulation methods and benchmarks. Such research could explore impact-oriented metrics like drought indicators and wildfire risks, or new variables like regional wind speeds.

This research is funded, in part, by Schmidt Sciences, LLC, and is part of the MIT Climate Grand Challenges team for “Bringing Computation to the Climate Challenge.”

© Credit: MIT News; iStock

Simple climate prediction models can outperform deep-learning approaches when predicting future temperature changes, but deep learning has potential for estimating more complex variables like rainfall, according to an MIT study.

On the joys of being head of house at McCormick Hall

While sharing a single cup of coffee, Raul Radovitzky, the Jerome C. Hunsaker Professor in the Department of Aeronautics and Astronautics, and his wife Flavia Cardarelli, senior administrative assistant in the Institute for Data, Systems, and Society, recently discussed the love they have for their “nighttime jobs” living in McCormick Hall as faculty heads of house, and explained why it is so gratifying for them to be a part of this community.

The couple, married for 32 years, first met playing in a sandbox at the age of 3 in Argentina (but didn't start dating until they were in their 20s). Radovitzky has been a part of the MIT ecosystem since 2001, while Cardarelli began working at MIT in 2006. They became heads of house at McCormick Hall, the only all-female residence hall on campus, in 2015, and recently applied to extend their stay.

“Our head-of-house role is always full of surprises. We never know what we’ll encounter, but we love it. Students think we do this just for them, but in truth, it’s very rewarding for us as well. It keeps us on our toes and brings a lot of joy,” says Cardarelli. “We like to think of ourselves as the cool aunt and uncle for the students,” Radovitzky adds.

Heads of house at MIT influence many areas of students’ development by acting as advisors and mentors to their residents. Additionally, they work closely with the residence hall’s student government, as well as staff from the Division of Student Life, to foster their community’s culture.

Vice Chancellor for Student Life Suzy Nelson explains, “Our faculty heads of house have the long view at MIT and care deeply about students’ academic and personal growth. We are fortunate to have such dedicated faculty who serve in this way. The heads of house enhance the student experience in so many ways — whether it is helping a student with a personal problem, hosting Thanksgiving dinner for students who were not able to go home, or encouraging students to get involved in new activities, they are always there for students.”

“Our heads of house help our students fully participate in residential life. They model civil discourse at community dinners, mentor and tutor residents, and encourage residents to try new things. With great expertise and aplomb, they formally and informally help our students become their whole selves,” says Chancellor Melissa Nobles.

“I love teaching, I love conducting research with my group, and I enjoy serving as a head of house. The community aspect is deeply meaningful to me. MIT has become such a central part of our lives. Our kids are both MIT graduates, and we are incredibly proud of them. We do have a life outside of MIT — weekends with friends and family, personal activities — but MIT is a big part of who we are. It’s more than a job; it’s a community. We live on campus, and while it can be intense and demanding, we really love it,” says Radovitzky.

Jessica Quaye ’20, a former resident of McCormick Hall, says, “What sets McCormick apart is the way Raul and Flavia transform the four dorm walls into a home for everyone. You might come to McCormick alone, but you never leave alone. If you ran into them somewhere on campus, you could be sure that they would call you out and wave excitedly. You could invite Raul and Flavia to your concerts and they would show up to support your extracurricular endeavors. They built an incredible family that carries the fabric of MIT with a blend of academic brilliance, a warm open-door policy, and unwavering support for our extracurricular pursuits.”

Soundbytes

Q: What first drew you to the heads of house role?

Radovitzky: I had been aware of the role since I arrived at MIT, and over time, I started to wonder if it might be something we’d consider. When our kids were young, it didn’t seem feasible — we lived in the suburbs, and life there was good. But I always had an innate interest in building stronger connections with the student community.

Later, several colleagues encouraged us to apply. I discussed it with the family. Everyone was excited about it. Our teenagers were thrilled by the idea of living on a college campus. We applied together, submitting a letter as a family explaining why we were so passionate about it. We interviewed at McCormick, Baker, and MacGregor. When we were offered McCormick, I’ll admit — I was nervous. I wasn’t sure I’d be the right fit for an all-female residence.

Cardarelli: We would have been nervous no matter where we ended up, but McCormick felt like home. It suited us in ways we didn’t anticipate. Raul, for instance, discovered he had a real rapport with the students, telling goofy jokes, making karaoke playlists, and learning about Taylor Swift and Nicki Minaj.

Radovitzky: It’s true! I never knew I’d become an expert at picking karaoke playlists. But we found our rhythm here, and it’s been deeply rewarding.

Q: What makes the McCormick community special?

Radovitzky: McCormick has a unique spirit. I can step out of our apartment and be greeted by 10 smiling faces. That energy is contagious. It’s not just about events or programming — it’s about building trust. We’ve built traditions around that, like our “make your own pizza” nights in our apartment, a wonderful McCormick event we inherited from our predecessors. We host four sessions each spring in which students roll out dough, choose toppings, and we chat as we cook and eat together. Everyone remembers the pizza nights — they’re mentioned in every testimonial.

Cardarelli: We’ve been lucky to have amazing graduate resident assistants and area directors every year. They’re essential partners in building community and supporting the students on their floors. They help with everything — from tutoring to events to walking students to urgent care if needed.

Radovitzky: In the fall, we take our residents to Crane Beach and host a welcome brunch. Karaoke in our apartment is a big hit too, and a unique way to make them comfortable coming to our apartment from day one. We do it three times a year — during orientation, and again each semester.

Cardarelli: We also host monthly barbecues open to all dorms and run McFast, our first-year tutoring program. Raul started by tutoring physics and math, four hours a week. Now, upperclass students lead most of the sessions. It’s great for both academic support and social connection.

Radovitzky: We also have an Independent Activities Period pasta night tradition. We cook for around 100 students, using four sauces that Flavia makes from scratch — bolognese, creamy mushroom, marinara, and pesto. Students love it.

Q: What’s unique about working in an all-female residence hall?

Cardarelli: I’ve helped students hem dresses, bake, and even apply makeup. It’s like having hundreds of daughters.

Radovitzky: The students here are incredibly mature and engaged. They show real interest in us as people. Many of the activities and connections we’ve built wouldn’t be possible in a different setting. Every year during “de-stress night,” I get my nails painted every color and have a face mask on. During “Are You Smarter Than an MIT Professor,” they dunk me in a water tank.

© Photo: Sarah Foote

Flavia Cardarelli (left) and Raul Radovitzky pose in front of McCormick Hall.

Trump shooting and Biden exit flipped social media from hostility to solidarity

The Trump assassination attempt on the front page of German newspaper Bild.

While previous research shows outrage and division drive engagement on social media, a new study of digital behaviour during the 2024 US election finds that this effect flips during a major crisis – when “ingroup solidarity” becomes the engine of online virality.

Psychologists say the findings show positive emotions such as unity can cut through the hostility on social media, but it takes a shock to the system that threatens a community.  

In a little over a week during the summer of 2024, the attempted assassination of Donald Trump at a rally (13 July) and Joe Biden’s suspension of his re-election campaign (21 July) completely reshaped the presidential race.

The University of Cambridge’s Social Decision-Making Lab collected over 62,000 public posts from the Facebook accounts of hundreds of US politicians, commentators and media outlets before and after these events to see how they affected online behaviour.*

“We wanted to understand the kinds of content that went viral among Republicans and Democrats during this period of high tension for both groups,” said Malia Marks, PhD candidate in Cambridge’s Department of Psychology and lead author of the study, published in the journal Proceedings of the National Academy of Sciences.

“Negative emotions such as anger and outrage along with hostility towards opposing political groups are usually rocket fuel for social media engagement. You might expect this to go into hyperdrive during times of crisis and external threat.”

“However, we found the opposite. It appears that political crises evoke not so much outgroup hate but rather ingroup love,” said Marks.

Just after the Trump assassination attempt, Republican-aligned posts signalling unity and shared identity received 53% more engagement than those that did not – an increase of 17 percentage points compared to just before the shooting.

These included posts such as evangelist Franklin Graham thanking God that Donald Trump is alive, and Fox News commentator Laura Ingraham posting: “Bleeding and unbowed, Trump faces relentless attacks yet stands strong for America. This is why his followers remain passionately loyal.”

At the same time, engagement levels for Republican posts attacking the Democrats saw a decrease of 23 percentage points from just a few days earlier.

After Biden suspended his re-election campaign, Democrat-aligned posts expressing solidarity received 91% more engagement than those that did not – a major increase of 71 percentage points over the period shortly before his withdrawal.

Posts included former US Secretary of Labor Robert Reich calling Biden “one of our most pro-worker presidents”, and former House Speaker Nancy Pelosi posting that Biden’s “legacy of vision, values and leadership make him one of the most consequential Presidents in American history.”

Biden’s withdrawal saw the continuation of a gradual rise in engagement for Democrat posts attacking Republicans – although over the 25 days in July covered by the analysis, almost a quarter of all conservative posts displayed “outgroup hostility” compared to just 5% of liberal posts.

Research led by the same Cambridge Lab, published in 2021, showed how social media posts criticizing or mocking those on the rival side of an ideological divide typically receive twice as many shares as posts that champion one’s own side.

“Social media platforms such as Twitter and Facebook are increasingly seen as creating toxic information environments that intensify social and political divisions, and there is plenty of research now to support this,” said Yara Kyrychenko, study co-author and PhD candidate in Cambridge’s Social Decision-Making Lab.

“Yet we see that social media can produce a rally-round-the-flag effect at moments of crisis, when the emotional and psychological preference for one’s own group takes over as the dominant driver of online behaviour.”

Last year, the Cambridge team (led by Kyrychenko) published a study of 1.6 million Ukrainian social media posts in the months before and after Russia’s full-scale invasion in February of 2022.

Following the invasion, they found a similar spike for “ingroup solidarity” posts, which got 92% more engagement on Facebook and 68% more on Twitter, while posts hostile to Russia received little extra engagement.

The researchers argue that the findings from the latest study are, in some ways, even more surprising: Ukraine faced an existential threat to a largely united population, whereas the United States experienced a political crisis amid deep polarisation.

“We didn’t know whether moments of political rather than existential crisis would trigger solidarity in a country as deeply polarised as the United States. But even here, group unity surged when leadership was threatened,” said Dr Jon Roozenbeek, Lecturer in Psychology at Cambridge University and senior author of the study.

“In times of crisis, ingroup love may matter more to us than outgroup hate on social media.”


* The study used 62,118 public posts from 484 Facebook accounts run by US politicians and partisan commentators or media sources from 5-29 July 2024.

Research reveals how political crises cause a shift in the force behind viral online content ‘from outgroup hate to ingroup love’.

It appears that political crises evoke not so much outgroup hate but rather ingroup love
Malia Marks
The Trump assassination attempt on the front page of German newspaper Bild.

Creative Commons License.
The text in this work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. Images, including our videos, are Copyright ©University of Cambridge and licensors/contributors as identified. All rights reserved. We make our image and video content available in a number of ways – on our main website under its Terms and conditions, and on a range of channels including social media that permit your use and sharing of our content under their respective Terms.

NUS receives S$3 million gift from Mapletree to strengthen service-learning courses and uplift over 60,000 seniors, vulnerable families

The National University of Singapore (NUS) has received a generous S$3 million pledge from global real estate powerhouse Mapletree Investments (Mapletree) to strengthen service-learning courses that will empower over 4,000 student volunteers annually to uplift more than 60,000 beneficiaries. This collaboration underscores that everyone has a role in building a "we-first" society where the government, community and corporate partners work together to create a more inclusive Singapore.

This milestone moment was graced by Guest-of-Honour Ms Low Yen Ling, Senior Minister of State, Ministry of Culture, Community and Youth & Ministry of Trade and Industry, on 25 August 2025 at Mapletree Business City.

Under the Communities and Engagement (C&E) pillar of the General Education curriculum, NUS undergraduate students across various disciplines can read service-learning courses as part of their graduation requirements. These courses encourage deep reflection and constructive actions on societal needs and real-world issues such as inequality and poverty which underprivileged and disadvantaged communities struggle with.

As the Principal Founding Donor and one of the largest donors towards the C&E Pillar with a focus on Seniors and Vulnerable Families, Mapletree’s contribution will sustain and expand NUS C&E courses such as GEN2060 Reconnect SeniorsSG, GEN2061 Support Healthy AgeingSG, GEN2062 Community Activities for Seniors with SG Cares, and GEN2070 Community Link (ComLink) Befrienders. Since the pilot launch of the C&E Pillar (Seniors and Vulnerable Families) in Academic Year 2022/2023, over 5,000 students have completed, or are currently enrolled in, these courses.

These service-learning courses run up to a year, encouraging students to take initiative in community service while developing critical thinking about complex social challenges.

Beyond volunteering, students also reflect, analyse and create solutions. The impact of this approach is multifaceted – beneficiaries find companionship and renewed hope; community partners gain extra help on the ground; students cultivate life-long values; and the ripple effects strengthen Singapore’s social fabric.

NUS President Professor Tan Eng Chye said, “We are grateful to Mapletree for this generous contribution, which will greatly enhance the impact of our service-learning courses. By empowering our students to serve the community, we are nurturing among the next generation empathy and a deeper awareness of societal needs among the disadvantaged and underprivileged. At the same time, we create opportunities for our students to do their part and support Singapore’s ageing population and lower-income families. These efforts reinforce our commitment to make a positive impact on society and community through our mission in education.”

Mr Edmund Cheng, Chairman of Mapletree, said, “Mapletree’s latest Corporate Social Responsibility (CSR) initiative with NUS aligns with two of our four CSR pillars – Healthcare and Education. Through our gift, as part of our US$10 million commitment to Temasek Trust’s Philanthropy Asia Alliance (PAA), these courses create beautiful bridges to facilitate relationships between students, seniors and vulnerable families, enriching the lives of all involved. We will continue to invest in the communities where we operate, strengthening the social fabric in meaningful ways.”

In 2023, Temasek Trust announced the launch of PAA to drive positive impact across Asia and mobilise collective philanthropic partnerships and strategies addressing global environmental and social challenges.

With Mapletree’s support, students will deepen their role as volunteers to implement hands-on initiatives to engage seniors and disadvantaged families. For example, as part of GEN2062, students facilitate thoughtfully designed activities at Active Ageing Centres and Senior Care Centres that stimulate cognitive function, enhance physical health, and improve the holistic well-being of elderly participants. Students in GEN2060 and GEN2070 conduct home visits to befriend seniors and vulnerable families, while students in GEN2061 share vital information about government assistance schemes with seniors through door-to-door visits. A small part of the gift will also go towards empowering student initiatives in other C&E courses to support these sectors.

Ms Cheryl Lim, Manager of Programmes at NTUC Health Senior Day Care, said, “We are heartened to witness the smiles on the seniors’ faces, made possible by the diverse range of activities organised by NUS students. The contributions of the students have been truly invaluable—their initiative in planning and leading these activities has not only enriched the lives of our seniors but also provided our team with interesting ideas. We truly appreciate the long-term, sustained collaboration and partnership with NUS that helps foster intergenerational bonds and nurture a continued sense of belonging, especially in our seniors, and within the broader community across generations.”

Complementing the efforts by community partners to promote better health and social engagement among seniors, the courses under the C&E pillar also enable NUS students to experience personal growth through their interaction and involvement with the seniors.

Mr Sean Ang Teng Han, a second-year student from the NUS Faculty of Science, who is currently reading the GEN2062 course, said, “It challenged me to step up and develop soft skills I did not have in the past, such as managing group dynamics, holding the attention of a crowd, and adapting quickly to different personalities. Learning how to engage and entertain a group of elderly participants has taught me a lot about communication and leadership in a more communal setting.”

Mapletree’s gift is the latest in a long-standing partnership with NUS that began over a decade ago with the establishment of the Mapletree Bursary in 2012. With a total endowment of S$900,000 to date, the Bursary has provided over 130 students with financial support, removing barriers that may otherwise pose challenges to their education journey. With this latest gift, Mapletree hopes to promote intergenerational bonding, a volunteering culture among youth, and opportunities to enhance age-in-place initiatives, one of the many efforts to address one of Singapore’s most pressing societal challenges – a rapidly ageing population, with one in four residents projected to be aged 65 or older by 2030.

New technologies tackle brain health assessment for the military

Cognitive readiness denotes a person's ability to respond and adapt to the changes around them. This includes functions like keeping balance after tripping, or making the right decision in a challenging situation based on knowledge and past experiences. For military service members, cognitive readiness is crucial for their health and safety, as well as mission success. Injury to the brain is a major contributor to cognitive impairment, and between 2000 and 2024, more than 500,000 military service members were diagnosed with traumatic brain injury (TBI) — caused by anything from a fall during training to blast exposure on the battlefield. While impairments from factors like sleep deprivation can be treated through rest and recovery, those caused by injury may require more intense and prolonged medical attention.

"Current cognitive readiness tests administered to service members lack the sensitivity to detect subtle shifts in cognitive performance that may occur in individuals exposed to operational hazards," says Christopher Smalt, a researcher in the laboratory's Human Health and Performance Systems Group. "Unfortunately, the cumulative effects of these exposures are often not well-documented during military service or after transition to Veterans Affairs, making it challenging to provide effective support."

Smalt is part of a team at the laboratory developing a suite of portable diagnostic tests that provide near-real-time screening for brain injury and cognitive health. One of these tools, called READY, is a smartphone or tablet app that helps identify a potential change in cognitive performance in less than 90 seconds. Another tool, called MINDSCAPE — which is being developed in collaboration with Richard Fletcher, a visiting scientist in the Rapid Prototyping Group who leads the Mobile Technology Lab at the MIT Auto-ID Laboratory, and his students — uses virtual reality (VR) technology for a more in-depth analysis to pinpoint specific conditions such as TBI, post-traumatic stress disorder, or sleep deprivation. Using these tests, medical personnel on the battlefield can make quick and effective decisions for treatment triage.

Both READY and MINDSCAPE are a response to a series of congressional legislative mandates, military program requirements, and mission-driven health needs to improve brain health screening capabilities for service members.

Cognitive readiness biomarkers

The READY and MINDSCAPE platforms incorporate more than a decade of laboratory research on finding the right indicators of cognitive readiness to build into rapid testing applications. Thomas Quatieri oversaw this work and identified balance, eye movement, and speech as three reliable biomarkers. He is leading the effort at Lincoln Laboratory to develop READY.

"READY stands for Rapid Evaluation of Attention for DutY, and is built on the premise that attention is the key to being 'ready' for a mission," he says. "In one view, we can think of attention as the mental state that allows you to focus on a task."

For someone to be attentive, their brain must continuously anticipate and process incoming sensory information and then instruct the body to respond appropriately. For example, if a friend yells "catch" and then throws a ball in your direction, in order to catch that ball, your brain must process the incoming auditory and visual data, predict in advance what may happen in the next few moments, and then direct your body to respond with an action that synchronizes those sensory data. The result? You realize from hearing the word "catch" and seeing the moving ball that your friend is throwing the ball to you, and you reach out a hand to catch it just in time.

"An unhealthy or fatigued brain — caused by TBI or sleep deprivation, for example — may have challenges within a neurosensory feed-forward [prediction] or feedback [error] system, thus hampering the person's ability to attend," Quatieri says.

READY's three tests measure a person's ability to track a moving dot with their eyes, to maintain balance, and to hold a vowel at a fixed pitch. The app then uses the data to calculate a variability or "wobble" indicator, which represents changes from the test taker's baseline or from expected results based on others with similar demographics, or the general population. The results are displayed to the user as an indication of the patient's level of attention.
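The article does not specify how the indicator is computed; the sketch below is a hypothetical illustration of one way a baseline-referenced "wobble" score could be formed, combining per-test variability measures into z-score-like deviations. The function and variable names (`wobble_score`, the test keys) are invented for illustration, not the READY implementation.

```python
import numpy as np

def wobble_score(session, baseline_mean, baseline_std):
    """Hypothetical 'wobble' indicator: how far one test session deviates
    from a person's baseline (or from population norms), in z-score units.

    session, baseline_mean, baseline_std: dicts keyed by test name, each
    holding a scalar variability measure for that test (e.g., eye-tracking
    error, postural sway, pitch jitter).
    """
    z = [
        abs(session[test] - baseline_mean[test]) / baseline_std[test]
        for test in ("eye_tracking", "balance", "voice_pitch")
    ]
    return float(np.mean(z))  # one combined attention/readiness indicator

# Example with made-up numbers: more variability than baseline on every test.
baseline_mean = {"eye_tracking": 0.8, "balance": 1.2, "voice_pitch": 0.3}
baseline_std = {"eye_tracking": 0.2, "balance": 0.4, "voice_pitch": 0.1}
today = {"eye_tracking": 1.3, "balance": 2.1, "voice_pitch": 0.6}

print(round(wobble_score(today, baseline_mean, baseline_std), 2))
```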

If the READY screen shows an impairment, the administrator can then direct the subject to follow up with MINDSCAPE, which stands for Mobile Interface for Neurological Diagnostic Situational Cognitive Assessment and Psychological Evaluation. MINDSCAPE uses VR technology to administer additional, in-depth tests to measure cognitive functions such as reaction time and working memory. These standard neurocognitive tests are recorded with multimodal physiological sensors, such as electroencephalography (EEG), photoplethysmography, and pupillometry, to better pinpoint diagnosis.

Holistic and adaptable

A key advantage of READY and MINDSCAPE is their ability to leverage existing technologies, allowing for rapid deployment in the field. By utilizing sensors and capabilities already integrated into smartphones, tablets, and VR devices, these assessment tools can be easily adapted for use in operational settings at a significantly reduced cost.

"We can immediately apply our advanced algorithms to the data collected from these devices, without the need for costly and time-consuming hardware development," Smalt says. "By harnessing the capabilities of commercially available technologies, we can quickly provide valuable insights and improve upon traditional assessment methods."

Bringing new capabilities and AI for brain-health sensing into operational environments is a theme across several projects at the laboratory. Another example is EYEBOOM (Electrooculography and Balance Blast Overpressure Monitoring System), a wearable technology developed for the U.S. Special Forces to monitor blast exposure. EYEBOOM continuously monitors a wearer's eye and body movements as they experience blast energy, and warns of potential harm. For this program, the laboratory developed an algorithm that could identify a potential change in physiology resulting from blast exposure during operations, rather than waiting for a check-in.

All three technologies are in development to be versatile, so they can be adapted for other relevant uses. For example, a workflow could pair EYEBOOM's monitoring capabilities with the READY and MINDSCAPE tests: EYEBOOM would continuously monitor for exposure risk and then prompt the wearer to seek additional assessment.

"A lot of times, research focuses on one specific modality, whereas what we do at the laboratory is search for a holistic solution that can be applied for many different purposes," Smalt says.

MINDSCAPE is undergoing testing at the Walter Reed National Military Medical Center this year. READY will be tested with the U.S. Army Research Institute of Environmental Medicine (USARIEM) in 2026 in the context of sleep deprivation. Smalt and Quatieri also see the technologies finding use in civilian settings — on sporting event sidelines, in doctors' offices, or wherever else there is a need to assess brain readiness.

MINDSCAPE is being developed with clinical validation and support from Stefanie Kuchinsky at the Walter Reed National Military Medical Center. Quatieri and his team are developing the READY tests in collaboration with Jun Maruta and Jam Ghajar from the Brain Trauma Foundation (BTF), and Kristin Heaton from USARIEM. The tests are supported by concurrent evidence-based guidelines led by the BTF and the Military TBI Initiative at the Uniformed Services University.

© Image: Tammy Ko/Lincoln Laboratory

Lincoln Laboratory researchers are building rapid brain health screening capabilities for military service members.

Can large language models figure out the real world?

Back in the 17th century, German astronomer Johannes Kepler figured out the laws of motion that made it possible to accurately predict where our solar system’s planets would appear in the sky as they orbit the sun. But it wasn’t until decades later, when Isaac Newton formulated the universal laws of gravitation, that the underlying principles were understood. Although Newton’s laws were inspired by Kepler’s, they went much further, and made it possible to apply the same formulas to everything from the trajectory of a cannon ball to the way the moon’s pull controls the tides on Earth — or how to launch a satellite from Earth to the surface of the moon or planets.

Today’s sophisticated artificial intelligence systems have gotten very good at making the kind of specific predictions that resemble Kepler’s orbit predictions. But do they know why these predictions work, with the kind of deep understanding that comes from basic principles like Newton’s laws? As the world grows ever-more dependent on these kinds of AI systems, researchers are struggling to measure just how they do what they do, and how deep their understanding of the real world actually is.

Now, researchers in MIT’s Laboratory for Information and Decision Systems (LIDS) and at Harvard University have devised a new approach to assessing how deeply these predictive systems understand their subject matter, and whether they can apply knowledge from one domain to a slightly different one. And by and large the answer at this point, in the examples they studied, is — not so much.

The findings were presented at the International Conference on Machine Learning, in Vancouver, British Columbia, last month by Harvard postdoc Keyon Vafa, MIT graduate student in electrical engineering and computer science and LIDS affiliate Peter G. Chang, MIT assistant professor and LIDS principal investigator Ashesh Rambachan, and MIT professor, LIDS principal investigator, and senior author Sendhil Mullainathan.

“Humans all the time have been able to make this transition from good predictions to world models,” says Vafa, the study’s lead author. So the question their team was addressing was, “have foundation models — has AI — been able to make that leap from predictions to world models? And we’re not asking are they capable, or can they, or will they. It’s just, have they done it so far?” he says.

“We know how to test whether an algorithm predicts well. But what we need is a way to test for whether it has understood well,” says Mullainathan, the Peter de Florez Professor with dual appointments in the MIT departments of Economics and Electrical Engineering and Computer Science and the senior author on the study. “Even defining what understanding means was a challenge.” 

In the Kepler versus Newton analogy, Vafa says, “they both had models that worked really well on one task, and that worked essentially the same way on that task. What Newton offered was ideas that were able to generalize to new tasks.” That capability, when applied to the predictions made by various AI systems, would entail having it develop a world model so it can “transcend the task that you’re working on and be able to generalize to new kinds of problems and paradigms.”

Another analogy that helps to illustrate the point is the difference between centuries of accumulated knowledge of how to selectively breed crops and animals, versus Gregor Mendel’s insight into the underlying laws of genetic inheritance.

“There is a lot of excitement in the field about using foundation models to not just perform tasks, but to learn something about the world,” for example in the natural sciences, he says. “It would need to adapt, have a world model to adapt to any possible task.”

Are AI systems anywhere near the ability to reach such generalizations? To test the question, the team looked at different examples of predictive AI systems, at different levels of complexity. On the very simplest of examples, the systems succeeded in creating a realistic model of the simulated system, but as the examples got more complex, that ability faded fast.

The team developed a new metric, a way of measuring quantitatively how well a system approximates real-world conditions. They call the measurement inductive bias — that is, a tendency or bias toward responses that reflect reality, based on inferences developed from looking at vast amounts of data on specific cases.

The simplest level of examples they looked at was known as a lattice model. In a one-dimensional lattice, something can move only along a line. Vafa compares it to a frog jumping between lily pads in a row. As the frog jumps or sits, it calls out what it’s doing — right, left, or stay. If it reaches the last lily pad in the row, it can only stay or go back. If someone, or an AI system, can just hear the calls, without knowing anything about the number of lily pads, can it figure out the configuration? The answer is yes: Predictive models do well at reconstructing the “world” in such a simple case. But even with lattices, as you increase the number of dimensions, the systems no longer can make that leap.
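As a hypothetical illustration of why the one-dimensional case is recoverable, the sketch below simulates the frog's calls from a hidden lattice and reconstructs the number of lily pads from the calls alone. It is a toy example, not the paper's models or its inductive-bias metric; all function names are invented.

```python
import random

def simulate_frog(n_pads, n_steps, seed=0):
    """Hidden 1-D lattice world: only the frog's calls are observable."""
    rng = random.Random(seed)
    pos = rng.randrange(n_pads)          # unknown starting lily pad
    calls = []
    for _ in range(n_steps):
        options = ["stay"]
        if pos > 0:
            options.append("left")
        if pos < n_pads - 1:
            options.append("right")
        move = rng.choice(options)
        pos += {"left": -1, "stay": 0, "right": 1}[move]
        calls.append(move)
    return calls

def infer_n_pads(calls):
    """Replay the calls to track position relative to the (unknown) start.
    With enough observations, the range of positions ever visited equals
    the true number of pads, so the hidden 'world' is recoverable."""
    pos, lo, hi = 0, 0, 0
    for move in calls:
        pos += {"left": -1, "stay": 0, "right": 1}[move]
        lo, hi = min(lo, pos), max(hi, pos)
    return hi - lo + 1

print(infer_n_pads(simulate_frog(n_pads=5, n_steps=10_000)))  # -> 5
```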

“For example, in a two-state or three-state lattice, we showed that the model does have a pretty good inductive bias toward the actual state,” says Chang. “But as we increase the number of states, then it starts to have a divergence from real-world models.”

A more complex problem is a system that can play the board game Othello, which involves players alternately placing black or white disks on a grid. The AI models can accurately predict what moves are allowable at a given point, but it turns out they do badly at inferring what the overall arrangement of pieces on the board is, including ones that are currently blocked from play.

The team then looked at five different categories of predictive models actually in use, and again, the more complex the systems involved, the more poorly the predictive models performed at matching the true underlying world model.

With this new metric of inductive bias, “our hope is to provide a kind of test bed where you can evaluate different models, different training approaches, on problems where we know what the true world model is,” Vafa says. If it performs well on these cases where we already know the underlying reality, then we can have greater faith that its predictions may be useful even in cases “where we don’t really know what the truth is,” he says.

People are already trying to use these kinds of predictive AI systems to aid in scientific discovery, including such things as properties of chemical compounds that have never actually been created, or of potential pharmaceutical compounds, or for predicting the folding behavior and properties of unknown protein molecules. “For the more realistic problems,” Vafa says, “even for something like basic mechanics, we found that there seems to be a long way to go.”

Chang says, “There’s been a lot of hype around foundation models, where people are trying to build domain-specific foundation models — biology-based foundation models, physics-based foundation models, robotics foundation models, foundation models for other types of domains where people have been collecting a ton of data” and training these models to make predictions, “and then hoping that it acquires some knowledge of the domain itself, to be used for other downstream tasks.”

This work shows there’s a long way to go, but it also helps to show a path forward. “Our paper suggests that we can apply our metrics to evaluate how much the representation is learning, so that we can come up with better ways of training foundation models, or at least evaluate the models that we’re training currently,” Chang says. “As an engineering field, once we have a metric for something, people are really, really good at optimizing that metric.”

© Image: iStock

Researchers at MIT and Harvard University have devised a new approach to assessing how deeply predictive AI systems understand their subject matter, and whether they can apply knowledge from one domain to a slightly different one.

Mediterranean diet offsets genetic risk for dementia, study finds

Health

Mediterranean diet offsets genetic risk for dementia, study finds

Greatest benefit for those with highest predisposition to Alzheimer’s disease

Mass General Brigham Communications

4 min read
Fish, vegetables, and other foods found in Mediterranean diet.

New research suggests that following a Mediterranean-style diet may help offset a person’s genetic risk for developing Alzheimer’s disease.

The study, published in Nature Medicine and led by investigators from Mass General Brigham, Harvard T.H. Chan School of Public Health, and the Broad Institute of MIT and Harvard, found that people at the highest genetic risk for Alzheimer’s disease who followed a Mediterranean diet — rich in vegetables, fruits, nuts, whole grains, and low in red and processed meats — showed slower cognitive decline as well as a greater reduction in dementia risk than those at lower genetic risk.

“One reason we wanted to study the Mediterranean diet is because it is the only dietary pattern that has been causally linked to cognitive benefits in a randomized trial,” said study first author Yuxi Liu, a research fellow in the Department of Medicine at Brigham and Women’s Hospital and a postdoctoral fellow at the Harvard Chan School and the Broad. “We wanted to see whether this benefit might be different in people with varying genetic backgrounds, and to examine the role of blood metabolites, the small molecules that reflect how the body processes food and carries out normal functions.”

“These findings suggest that dietary strategies could help reduce the risk of cognitive decline and stave off dementia by broadly influencing key metabolic pathways.” 

Yuxi Liu, study’s first author

Over the last few decades, researchers have learned more about the genetic and metabolic basis of Alzheimer’s disease and related dementias. These are among the most common causes of cognitive decline in older adults. Alzheimer’s disease is known to have a strong genetic component, with heritability estimated at up to 80 percent.

One gene in particular, apolipoprotein E, or APOE, has emerged as the strongest genetic risk factor for sporadic Alzheimer’s disease — the more common form, which develops later in life and is not directly inherited in a predictable pattern. People who carry one copy of the APOE4 variant have a three- to fourfold higher risk of developing Alzheimer’s. People with two copies of the APOE4 variant have a 12-fold higher risk of Alzheimer’s than those without.

To explore how the Mediterranean diet may reduce dementia risk and influence blood metabolites linked to cognitive health, the team analyzed data from 4,215 women in the Nurses’ Health Study, following participants from 1989 to 2023 (average age 57 at baseline). To validate their findings, the researchers analyzed similar data from 1,490 men in the Health Professionals Follow-Up Study, followed from 1993 to 2023.

Researchers evaluated long-term dietary patterns using food frequency questionnaires and examined participants’ blood samples for a broad range of metabolites. Genetic data were used to assess each participant’s inherited risk for Alzheimer’s disease. Participants were then followed over time for new cases of dementia. A subset of 1,037 women underwent regular telephone-based cognitive testing.

They found that people following a more Mediterranean-style diet had a lower risk of developing dementia and showed slower cognitive decline. The protective effect of the diet was strongest in the high-risk group with two copies of the APOE4 gene variant, suggesting that diet may help offset genetic risk.

“These findings suggest that dietary strategies, specifically the Mediterranean diet, could help reduce the risk of cognitive decline and stave off dementia by broadly influencing key metabolic pathways,” Liu said. “This recommendation applies broadly, but it may be even more important for individuals at a higher genetic risk, such as those carrying two copies of the APOE4 genetic variant.”

A study limitation was that the cohort consisted of well-educated individuals of European ancestry. More research is needed in diverse populations.

In addition, although the study reveals important associations, genetics and metabolomics are not yet part of most clinical risk prediction models for Alzheimer’s disease. People often don’t know their APOE genetics. More work is needed to translate these findings into routine medical practice.

“In future research, we hope to explore whether targeting specific metabolites through diet or other interventions could provide a more personalized approach to reducing dementia risk,” Liu said.


This study was funded in part by the National Institutes of Health.

Seeding solutions for bipolar disorder

Science & Tech

Seeding solutions for bipolar disorder

Human brain organoid showing the integration of excitatory (magenta) and inhibitory neurons (green) of the cerebral cortex.

Credit: Arlotta Lab

Kermit Pattison

Harvard Staff Writer

9 min read

Brain Science grants promote new approaches to treat the condition and discover underlying causes

Paola Arlotta holds up a vial of clear fluid swirling with tiny orbs. When she shakes her wrist, the shapes flutter like the contents of a snow globe.

“Those small spheres swirling around are actually tiny pieces of human cerebral cortex,” said Arlotta, the Golub Family Professor of Stem Cell and Regenerative Biology, “except instead of coming from the brain of a person, they were made in the lab.”

Those minuscule shapes may represent a giant opportunity for breakthroughs into bipolar disorder, a mental health condition that affects about 8 million people in the U.S. These lab-grown “organoids” — brain-like tissue engineered from blood cells of living patients — offer a means to discover more effective drugs and develop more personalized treatments for bipolar patients.

Paola Arlotta.

Harvard file photo

The research effort is just one example of the diverse array of projects funded by the Bipolar Disorder Seed Grant Program of the Harvard Brain Science Initiative, a collaboration between the Faculty of Arts and Sciences (FAS) and Harvard Medical School (HMS). Over the last decade, the program has funded more than 90 projects across the University and affiliated hospitals and hosted five symposia. In some cases, the grants have enabled researchers to develop innovative approaches that subsequently won larger grants from major funding agencies and to publish their findings in prominent journals such as Nature.

“The goal for this grant program has always been to help creative scientists in our community initiate new avenues of research related to bipolar disorder,” said Venkatesh Murthy, co-director of the Harvard Brain Science Initiative and Raymond Leo Erikson Life Sciences Professor of Molecular & Cellular Biology. “New directions, as well as new thinkers, are vital for understanding and eventually curing this damaging disorder.”

The program began in 2015 with the first of a series of gifts from the Dauten Family Foundation and recently expanded thanks to a new gift from Sandra Lee Chen ’85 and Sidney Chen. Kent Dauten, M.B.A. ’79, and his wife, Liz, took up the cause after two of their four children were diagnosed with bipolar disorder despite no known family history of the illness. “The field is terribly underfunded and for too long was a discouraging corner of science because of the complexity of these brain disorders, but in recent years has become an exciting frontier for discovery,” said Kent Dauten. The Chens had similar motivations. “Bipolar disorder has touched our family,” said Sandra Chen. “Our experiences drive our commitment to help advance understanding of what causes this disruptive disorder.”

The program now provides each project with $174,000 spread over two years. The 11 projects funded this year will investigate bipolar disorder causes and treatments from perspectives including genetics, brain circuitry, sleep, immune dysregulation, stress hormones, and gut bacteria.

The seed grants seek to nurture “outside-the-box ideas,” Murthy said. He added, “Many of our grantees have made significant discoveries with this support.”

An unsolved problem

Bipolar disorder usually begins in adolescence, and on average patients suffer from symptoms for nine years before they are diagnosed. It brings recurrent episodes of mania and depression — most often the latter.

The typical treatment involves mood stabilizer medications such as lithium. Some patients also are prescribed antipsychotic medications, but these can cause weight gain.

The disorder often brings other health challenges such as cardiovascular diseases, Type 2 diabetes, metabolic syndrome, and obesity. Patients have a life expectancy 12 to 14 years below average and elevated rates of suicide.

The causes of bipolar remain unknown, but the disorder appears to arise from a complex mix of genetic, epigenetic, neurochemical, and environmental factors.

Basic science: When brain signaling goes awry

Extreme mood swings are a hallmark of bipolar disorder. Patients often veer from manic episodes (characterized by grandiosity, risky behaviors, compulsive talking, distractibility, and reduced need for sleep) to depressive periods (sullen moods, joylessness, weight changes, fatigue, inability to concentrate, indecisiveness, and suicidal thoughts).

Nao Uchida, a professor of molecular and cellular biology, suspects that one driver of this volatility is dopamine, a neurotransmitter that plays a key role in learning, memory, movement, motivation, mood, and attention.

Uchida studies the role of dopamine in animal learning and decision-making. Dopamine often is described as the brain’s “reward system,” but Uchida suggests it is better understood as an arbiter of predictions and their outcomes. Mood often depends not on the result itself, but instead on how much the outcome differs from expectations — what scientists call the reward prediction error (RPE).

A few years ago, Uchida became interested in how dysregulation of the dopamine system might offer insights into the swings of bipolar disorder.

“We had not done research related to these diseases before, so this seed grant really let me enter the field,” said Uchida.

The funds allowed his lab to test how manipulation of depressive or manic states altered the responses of dopamine neurons in mice. The team incorporated new insights into how synapses become potentiated or depressed, making certain pathways stronger or weaker. Some of their early findings will soon be published in Nature Communications.

Uchida posits that the disorder may be linked to skewed signaling of the neurotransmitters involved in prediction and learning. When the dopamine baseline is high, the person may become biased to learn from positive outcomes and fail to heed negative ones — and thus become prone to taking dangerous risks or entering manic states. In contrast, when the dopamine baseline is low, people pay too much attention to negative outcomes and ignore positive ones — and this pessimism pushes them toward depression.
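As a toy illustration of this idea (not the lab's actual model), the sketch below applies the standard reward-prediction-error update with asymmetric learning rates for better-than-expected and worse-than-expected outcomes, a stand-in for a high or low dopamine baseline. The function names and numbers are invented for illustration.

```python
def update_value(value, reward, lr_pos, lr_neg):
    """One reward-prediction-error (RPE) update.

    rpe > 0: outcome better than expected; rpe < 0: worse than expected.
    Asymmetric learning rates bias which kind of surprise shapes future
    expectations -- a toy stand-in for a high or low dopamine baseline.
    """
    rpe = reward - value
    lr = lr_pos if rpe > 0 else lr_neg
    return value + lr * rpe

def run(rewards, lr_pos, lr_neg, value=0.0):
    for r in rewards:
        value = update_value(value, r, lr_pos, lr_neg)
    return value

rewards = [1, 0, 1, 0, 0, 1, 0, 0]  # mixed good and bad outcomes

# Overweighting positive surprises yields rosier expectations (a manic-like
# bias toward risk); overweighting negative surprises yields gloomier ones
# (a depressive-like bias).
print(round(run(rewards, lr_pos=0.5, lr_neg=0.05), 2))
print(round(run(rewards, lr_pos=0.05, lr_neg=0.5), 2))
```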

“A lot of our future predictions depend on our experiences,” said Uchida. “I think that process might be altered in various diseases, including depression, addiction, and bipolar disorders.”

Nao Uchida (left) and Louisa Sylvia.

Harvard file photo; courtesy photo

Clinical research: Reducing obesity

Louisa Sylvia got an intimate glimpse of bipolar disorder in her first job after college. Working as a clinical research coordinator in a bipolar clinic, she witnessed patients struggling with anxiety, depression, and other symptoms. Again and again, she saw patients gain weight after being prescribed medications.

“I quickly became disappointed by the options that were out there for individuals with bipolar,” recalled Sylvia, now an associate professor in the Department of Psychiatry at Mass General Hospital and HMS. “It was really just medications — medications that can have really bad side effects.”

Sylvia has devoted her career to finding better options. (She also is the author of “The Wellness Workbook for Bipolar Disorder: Your Guide to Getting Healthy and Improving Your Mood.”) Even with the best current medications and psychotherapy, many patients continue to suffer from depression and other side effects. To supplement standard therapies, she has sought to develop interventions involving diet, exercise, and wellness.

One promising strategy is time-restricted eating (TRE). Restricting meals to a limited window — say 8 a.m. to 6 p.m. — can result in weight loss, improved mood and cognition, and better sleep.

With the seed grant, Sylvia plans to conduct a trial to evaluate the effects of TRE on bipolar patients. The study will investigate how the regulation of eating habits affects weight, mood, cognition, quality of life, and sleep patterns. She will work with Leilah Grant, an instructor at HMS and researcher at Brigham and Women’s Hospital who specializes in sleep and circadian physiology.

“For individuals who are depressed or have difficulty with motivation or energy, TRE is actually considered one of the easier lifestyle interventions to adhere to,” said Sylvia, who also is associate director of the Dauten Family Center for Bipolar Treatment Innovation at MGH. “We’re basically just saying, ‘Don’t focus as much on what you eat, but rather when you are eating.’”

The seed grants seek to nurture promising approaches that might not get funded through other channels. Sylvia can attest to the value of this opportunity; she had two TRE grant applications for federal funding rejected.

“I look at it like an innovation grant to try something that’s a little bit different but won’t get funded by the normal channels,” she said.

Translational research: Brain avatars

Despite decades of research, the success rate of drugs for treating bipolar disorder remains frustratingly low. Lithium, the mainstay first-line treatment, fully benefits only about 30 percent of patients — but three-quarters of them also suffer from profound side effects.

Animal models do not always translate to human medicine. Among humans, responses vary greatly; some individuals benefit from drug treatments while others do not.

To address these shortcomings, Arlotta is developing an innovative method to test drugs on brain cells of people with bipolar — without putting the humans themselves at risk.

Her team has spent more than a decade developing human brain organoids. They begin by taking a single sample of blood from a person. Because blood cells carry copies of our DNA, they hold the instruction manuals that guide development from fetus to adult. With a series of biochemical signals, these blood cells are reprogrammed to become stem cells. The team then uses another set of signals to mimic the normal process of cell differentiation to grow human brain cells — except as cell cultures outside the body.

“You can grow thousands and thousands of brain organoids from any one of us,” said Arlotta. “If the blood comes from a patient with a disorder, then every single cell in that organoid carries the genome, and genetic risk, of that patient.”

These “avatars” — each about five millimeters in diameter — contain millions of brain cells and hundreds of different cell types. “That is the only experimental model of our brain that science has today,” she said. “It may not be possible to investigate the brain of a patient with bipolar disorder, but scientists might be able to use their avatars.”

In pilot studies, the Arlotta team created brain organoids from stem cells from two groups of bipolar patients: “lithium responders” who benefit from the drug and “lithium nonresponders” who do not. The researchers will test whether these organoids replicate the differences seen in living patients — and then use them to develop more effective therapeutic drugs.

But Arlotta knows that no single approach represents a panacea. Because bipolar disorder remains so mysterious, the seed grant program is valuable because it promotes many promising lines of research across disciplines.

“The program has the modesty of understanding that we know very little about bipolar disorder,” said Arlotta. “Therefore, we need to have multiple shots on goal.”

Imaging tech promises deepest looks yet into living brain tissue at single-cell resolution

For both research and medical purposes, researchers have spent decades pushing the limits of microscopy to produce ever deeper and sharper images of brain activity, not only in the cortex but also in regions underneath, such as the hippocampus. In a new study, a team of MIT scientists and engineers demonstrates a new microscope system capable of peering exceptionally deep into brain tissues to detect the molecular activity of individual cells by using sound.

“The major advance here is to enable us to image deeper at single-cell resolution,” says neuroscientist Mriganka Sur, a corresponding author along with mechanical engineering professor Peter So and principal research scientist Brian Anthony. Sur is the Paul and Lilah Newton Professor in The Picower Institute for Learning and Memory and the Department of Brain and Cognitive Sciences at MIT.

In the journal Light: Science and Applications, the team demonstrates that they could detect NAD(P)H, a molecule tightly associated with cell metabolism in general and electrical activity in neurons in particular, all the way through samples such as a 1.1-millimeter “cerebral organoid,” a 3D mini brain-like tissue generated from human stem cells, and a 0.7-millimeter-thick slice of mouse brain tissue.

In fact, says co-lead author and mechanical engineering postdoc W. David Lee, who conceived the microscope’s innovative design, the system could have peered far deeper, but the test samples weren’t big enough to demonstrate that.

“That’s when we hit the glass on the other side,” he says. “I think we’re pretty confident about going deeper.”

Still, a depth of 1.1 millimeters is more than five times deeper than other microscope technologies can resolve NAD(P)H within dense brain tissue. The new system achieved the depth and sharpness by combining several advanced technologies to precisely and efficiently excite the molecule and then to detect the resulting energy, all without having to add any external labels, either via added chemicals or genetically engineered fluorescence.

Rather than focusing the required NAD(P)H excitation energy on a neuron with near-ultraviolet light at its normal peak absorption, the scope accomplishes the excitation by focusing an intense, extremely short burst of light (a quadrillionth of a second long) at three times the normal absorption wavelength. Such “three-photon” excitation penetrates deep into the brain with less scattering because of the light’s longer wavelength (“like fog lamps,” Sur says). Meanwhile, although the excitation produces a weak fluorescent signal from NAD(P)H, most of the absorbed energy produces a localized (about 10 microns) thermal expansion within the cell, generating sound waves that travel through tissue far more easily than the fluorescence emission does. A sensitive ultrasound microphone in the microscope detects those waves and, with enough sound data, software turns them into high-resolution images (much like a sonogram does). Imaging created in this way is “three-photon photoacoustic imaging.”
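
As a rough back-of-the-envelope illustration of the wavelength arithmetic described above (the ~340-nanometer single-photon absorption peak for NAD(P)H is an assumed textbook value, not a figure taken from this study), three photons at triple the wavelength together carry the same energy as one near-ultraviolet photon:

```python
# Sketch of the three-photon excitation energy bookkeeping.
# The 340 nm single-photon peak is an assumed value for NAD(P)H.

h = 6.626e-34   # Planck constant, J*s
c = 2.998e8     # speed of light, m/s

single_photon_nm = 340.0                  # assumed NAD(P)H absorption peak
three_photon_nm = 3 * single_photon_nm    # ~1020 nm, near-infrared

E1 = h * c / (single_photon_nm * 1e-9)    # energy of one near-UV photon
E3 = h * c / (three_photon_nm * 1e-9)     # energy of one near-IR photon

print(f"one UV photon:   {E1:.2e} J")
print(f"three IR photons: {3 * E3:.2e} J  (same total energy)")
# Longer-wavelength light scatters less in tissue, which is why the combined
# pulse can reach deeper before delivering the excitation energy.
```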

“We merged all these techniques — three-photon, label-free, photoacoustic detection,” says co-lead author Tatsuya Osaki, a research scientist in the Picower Institute in Sur’s lab. “We integrated all these cutting-edge techniques into one process to establish this ‘Multiphoton-In and Acoustic-Out’ platform.”

Lee and Osaki combined with research scientist Xiang Zhang and postdoc Rebecca Zubajlo to lead the study, in which the team demonstrated reliable detection of the sound signal through the samples. So far, the team has produced visual images from the sound at various depths as they refine their signal processing.

In the study, the team also shows simultaneous “third-harmonic generation” imaging, which comes from the three-photon stimulation and finely renders cellular structures, alongside their photoacoustic imaging, which detects NAD(P)H. They also note that their photoacoustic method could detect other molecules, such as the genetically encoded calcium indicator GCaMP, which neuroscientists use to report neural electrical activity.

With the concept of label-free, multiphoton, photoacoustic microscopy (LF-MP-PAM) established in the paper, the team is now looking ahead to neuroscience and clinical applications.

For instance, through the company Precision Healing, Inc., which he founded and sold, Lee has already established that NAD(P)H imaging can inform wound care. In the brain, levels of the molecule are known to vary in conditions such as Alzheimer’s disease, Rett syndrome, and seizures, making it a potentially valuable biomarker. Because the new system is label-free (i.e., no added chemicals or altered genes), it could be used in humans, for instance, during brain surgeries.

The next step for the team is to demonstrate it in a living animal, rather than just in in vitro and ex-vivo tissues. The technical challenge there is that the microphone can no longer be on the opposite side of the sample from the light source (as it was in the current study). It has to be on top, just like the light source.

Lee says he expects that full imaging at depths of 2 millimeters in live brains is entirely feasible, given the results in the new study.

“In principle, it should work,” he says.

Mercedes Balcells and Elazer Edelman are also authors of the paper. Funding for the research came from sources including the National Institutes of Health, the Simon Center for the Social Brain, the lab of Peter So, The Picower Institute for Learning and Memory, and the Freedom Together Foundation.

© Photo: Tatsuya Osaki

Researchers have developed a new microscope system to finely image molecules deep in live brain tissues, an advance that could boost neuroscience and clinical research.

Transforming boating, with solar power

The MIT Sailing Pavilion hosted an altogether different marine vessel recently: a prototype of a solar electric boat developed by James Worden ’89, the founder of the MIT Solar Electric Vehicle Team (SEVT). Worden visited the pavilion on a sizzling, sunny day in late July to offer students from the SEVT, the MIT Edgerton Center, MIT Sea Grant, and the broader community an inside look at the Anita, named for his late wife.

Worden’s fascination with solar power began at age 10, when he picked up a solar chip at a “hippy-like” conference in his hometown of Arlington, Massachusetts. “My eyes just lit up,” he says. He built his first solar electric vehicle in high school, fashioned out of cardboard and wood (taking first place at the 1984 Massachusetts Science Fair), and continued his journey at MIT, founding SEVT in 1986. It was through SEVT that he met his wife and lifelong business partner, Anita Rajan Worden ’90. Together, they founded two companies in the solar electric and hybrid vehicles space, and in 2022 launched a solar electric boat company.

On the Charles River, Worden took visitors for short rides on Anita, including a group of current SEVT students who peppered him with questions. The 20-foot pontoon boat, just 12 feet wide and 7 feet tall, is made of carbon fiber composites, single crystalline solar photovoltaic cells, and lithium iron phosphate battery cells. Ultimately, Worden envisions the prototype could have applications as mini-ferry boats and water taxis.

With warmth and humor, he drew parallels between the boat’s components and mechanics and those of the solar cars the students are building. “It’s fun! If you think about all the stuff you guys are doing, it’s all the same stuff,” he told them, “optimizing all the different systems and making them work.” He also explained the design considerations unique to boating applications, like refining the hull shape for efficiency and maneuverability in variable water and wind conditions, and the critical importance of protecting wiring and controls from open water and condensate.

“Seeing Anita in all its glory was super cool,” says Nicole Lin, vice captain of SEVT. “When I first saw it, I could immediately map the different parts of the solar car to its marine counterparts, which was astonishing to see how far I’ve come as an engineer with SEVT. James also explained the boat using solar car terms, as he drew on his experience with solar cars for his solar boats. It blew my mind to see the engineering we learned with SEVT in action.”

Over the years, the Wordens have been avid supporters of SEVT and the Edgerton Center, so the visit was, in part, a way to pay it forward to MIT. “There’s a lot of connections,” he says. He’s still awed by the fact that Harold “Doc” Edgerton, upon learning about his interest in building solar cars, carved out a lab space for him to use in Building 20 — as a first-year student. And a few years ago, as Worden became interested in marine vessels, he tapped Sea Grant Education Administrator Drew Bennett for a 90-minute whiteboard lecture, “MIT fire-hose style,” on hydrodynamics. “It was awesome!” he says.

© Photo: Sarah Foote

A group of visitors sets off from the dock for a cruise around the Charles River. The Anita weighs about 2,800 pounds and can accommodate six passengers at a time.

Astronomers detect the brightest fast radio burst of all time

A fast radio burst is an immense flash of radio emission that lasts for just a few milliseconds, during which it can momentarily outshine every other radio source in its galaxy. These flares can be so bright that their light can be seen from halfway across the universe, several billion light years away.

The sources of these brief and dazzling signals are unknown. But scientists now have a chance to study a fast radio burst (FRB) in unprecedented detail. An international team of scientists, including physicists at MIT, has detected a nearby and ultrabright fast radio burst some 130 million light-years from Earth in the constellation Ursa Major. It is one of the closest FRBs detected to date. It is also the brightest — so bright that the signal has garnered the informal moniker RBFLOAT, for “radio brightest flash of all time.”

The burst’s brightness, paired with its proximity, is giving scientists the closest look yet at FRBs and the environments from which they emerge.

“Cosmically speaking, this fast radio burst is just in our neighborhood,” says Kiyoshi Masui, associate professor of physics and affiliate of MIT’s Kavli Institute for Astrophysics and Space Research. “This means we get this chance to study a pretty normal FRB in exquisite detail.”

Masui and his colleagues report their findings today in the Astrophysical Journal Letters.

Diverse bursts

The clarity of the new detection is thanks to a significant upgrade to the Canadian Hydrogen Intensity Mapping Experiment (CHIME), a large array of halfpipe-shaped antennae based in British Columbia. CHIME was originally designed to detect and map the distribution of hydrogen across the universe. The telescope is also sensitive to ultrafast and bright radio emissions. Since it started observations in 2018, CHIME has detected about 4,000 fast radio bursts, from all parts of the sky. But until now, the telescope had not been able to precisely pinpoint the location of each fast radio burst.

CHIME recently got a significant boost in precision, in the form of CHIME Outriggers — three miniature versions of CHIME, each sited in different parts of North America. Together, the telescopes work as one continent-sized system that can focus in on any bright flash that CHIME detects, to pin down its location in the sky with extreme precision.

“Imagine we are in New York and there’s a firefly in Florida that is bright for a thousandth of a second, which is usually how quick FRBs are,” says MIT Kavli graduate student Shion Andrew. “Localizing an FRB to a specific part of its host galaxy is analogous to figuring out not just what tree the firefly came from, but which branch it’s sitting on.”
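
A rough order-of-magnitude sketch of why the continent-sized baseline helps: an interferometer's angular resolution scales roughly as the observing wavelength divided by the separation between stations. CHIME observes at 400–800 megahertz; the 3,000-kilometer baseline below is an illustrative assumption, not a figure from the paper.

```python
# Rough sketch of the angular precision a continent-scale baseline affords.

import math

c = 2.998e8                    # speed of light, m/s
freq_hz = 600e6                # mid-band CHIME observing frequency
baseline_m = 3.0e6             # assumed ~3,000 km CHIME-to-outrigger baseline

wavelength = c / freq_hz                       # ~0.5 m
theta_rad = wavelength / baseline_m            # diffraction-limited resolution
theta_arcsec = math.degrees(theta_rad) * 3600  # radians -> arcseconds

print(f"angular resolution ~ {theta_arcsec:.3f} arcsec")
# Tens of milliarcseconds is enough to place a burst within a specific region
# of its host galaxy, rather than just somewhere inside the galaxy.
```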

The new fast radio burst is the first detection made using the combination of CHIME and the completed CHIME Outriggers. Together, the telescope array identified the FRB and determined not only the specific galaxy, but also the region of the galaxy from where the burst originated. It appears that the burst arose from the edge of the galaxy, just outside of a star-forming region. The precise localization of the FRB is allowing scientists to study the environment around the signal for clues to what brews up such bursts.

“As we’re getting these much more precise looks at FRBs, we’re better able to see the diversity of environments they’re coming from,” says MIT physics postdoc Adam Lanman.

Lanman, Andrew, and Masui are members of the CHIME Collaboration — which includes scientists from multiple institutions around the world — and are authors of the new paper detailing the discovery of the new FRB detection.

An older edge

Each of CHIME’s Outrigger stations continuously monitors the same swath of sky as the parent CHIME array. Both CHIME and the Outriggers “listen” for radio flashes at incredibly short, millisecond timescales. Even over a few minutes, such fine-grained monitoring generates an enormous amount of data, so if CHIME detects no FRB signal, the Outriggers automatically delete the last 40 seconds of data to make room for the next span of measurements.
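
A minimal sketch of that triggered-capture scheme, with the sample rate and file handling as illustrative assumptions rather than details of the collaboration's actual pipeline:

```python
# Keep only the most recent ~40 seconds of samples in memory, discard
# continuously, and dump the buffer only when a candidate burst is flagged.

from collections import deque
import time

WINDOW_SEC = 40
SAMPLE_RATE_HZ = 1000                        # assumed, for illustration
buffer = deque(maxlen=WINDOW_SEC * SAMPLE_RATE_HZ)

def record_sample(sample):
    """Continuously append; the deque silently drops samples older than ~40 s."""
    buffer.append((time.time(), sample))

def on_trigger(event_id):
    """When the parent telescope reports a detection, freeze and save the window."""
    snapshot = list(buffer)
    with open(f"frb_{event_id}.txt", "w") as f:
        for timestamp, sample in snapshot:
            f.write(f"{timestamp}\t{sample}\n")
    return len(snapshot)
```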

On March 16, 2025, CHIME detected an ultrabright flash of radio emissions, which automatically triggered the CHIME Outriggers to record the data. Initially, the flash was so bright that astronomers were unsure whether it was an FRB or simply a terrestrial event caused, for instance, by a burst of cellular communications.

That notion was put to rest as the CHIME Outrigger telescopes focused in on the flash and pinned down its location to NGC4141 — a spiral galaxy in the constellation Ursa Major about 130 million light years away, which happens to be surprisingly close to our own Milky Way. The detection is one of the closest and brightest fast radio bursts detected to date.

Follow-up observations in the same region revealed that the burst came from the very edge of an active region of star formation. While it’s still a mystery as to what source could produce FRBs, scientists’ leading hypothesis points to magnetars — young neutron stars with extremely powerful magnetic fields that can spin out high-energy flares across the electromagnetic spectrum, including in the radio band. Physicists suspect that magnetars are found in the center of star formation regions, where the youngest, most active stars are forged. The location of the new FRB, just outside a star-forming region in its galaxy, may suggest that the source of the burst is a slightly older magnetar.

“These are mostly hints,” Masui says. “But the precise localization of this burst is letting us dive into the details of how old an FRB source could be. If it were right in the middle, it would only be thousands of years old — very young for a star. This one, being on the edge, may have had a little more time to bake.”

No repeats

In addition to pinpointing where the new FRB was in the sky, the scientists also looked back through CHIME data to see whether any similar flares occurred in the same region in the past. Since the first FRB was discovered in 2007, astronomers have detected over 4,000 radio flares. Most of these bursts are one-offs. But a few percent have been observed to repeat, flashing every so often. And an even smaller fraction of these repeaters flash in a pattern, like a rhythmic heartbeat, before flaring out. A central question surrounding fast radio bursts is whether repeaters and nonrepeaters come from different origins.

The scientists looked through CHIME’s six years of data and came up empty: This new FRB appears to be a one-off, at least in the last six years. The findings are particularly exciting, given the burst’s proximity. Because it is so close and so bright, scientists can probe the environment in and around the burst for clues to what might produce a nonrepeating FRB.

“Right now we’re in the middle of this story of whether repeating and nonrepeating FRBs are different. These observations are putting together bits and pieces of the puzzle,” Masui says.

“There’s evidence to suggest that not all FRB progenitors are the same,” Andrew adds. “We’re on track to localize hundreds of FRBs every year. The hope is that a larger sample of FRBs localized to their host environments can help reveal the full diversity of these populations.”

The construction of the CHIME Outriggers was funded by the Gordon and Betty Moore Foundation and the U.S. National Science Foundation. The construction of CHIME was funded by the Canada Foundation for Innovation and provinces of Quebec, Ontario, and British Columbia.

© Credit: Danielle Futselaar

A team of scientists, including physicists at MIT, has detected a nearby and ultrabright fast radio burst some 130 million light-years from Earth in the constellation Ursa Major.

Physicians embrace AI note-taking technology

Health

Physicians embrace AI note-taking technology

Illustration to depict AI taking medical notes.

Ryan Jaslow

Mass General Brigham Communications

5 min read

‘There is literally no other intervention in our field that impacts burnout to this extent’ 

AI-driven scribes that record patient visits and draft clinical notes for physician review led to significant reductions in physician burnout and improvements in well-being, according to a Mass General Brigham study of two large healthcare systems.

The findings, published in JAMA Network Open, draw on surveys of more than 1,400 physicians and advanced practice providers at both Harvard-affiliated Mass General Brigham and Atlanta’s Emory Healthcare.

At MGB, use of ambient documentation technologies was associated with a 21.2 percent absolute reduction in burnout prevalence at 84 days, while Emory Healthcare saw a 30.7 percent absolute increase in documentation-related well-being at 60 days.

“Ambient documentation technology has been truly transformative in freeing up physicians from their keyboards to have more face-to-face interaction with their patients,” said study co-senior author Rebecca Mishuris, chief medical information officer at MGB, a faculty member at Harvard Medical School, and a primary care physician in the healthcare system. “Our physicians tell us that they have their nights and weekends back and have rediscovered their joy of practicing medicine. There is literally no other intervention in our field that impacts burnout to this extent.”

Physician burnout affects more than 50 percent of U.S. doctors and has been linked to time spent in electronic health records, particularly after hours. There is additional evidence that the burden and anticipation of needing to complete appointment notes also contribute significantly to physician burnout.

“Burnout adversely impacts both providers and their patients who face greater risks to their safety and access to care,” said Lisa Rotenstein, a co-senior study author and director of The Center for Physician Experience and Practice Excellence at Brigham and Women’s Hospital. She is also an assistant clinical professor of medicine at the UCSF School of Medicine. “This is an issue that hospitals nationwide are looking to tackle, and ambient documentation provides a scalable technology worth further study.”

Qualitative feedback from users highlighted that ambient documentation enabled more “contact with patients and families” and improved their “joy in practice,” while recognizing its potential to “fundamentally [change] the experience of being a physician.” However, some users felt it added time to their note-writing or had less utility for certain visit types or medical specialties. Since the pilot studies began, the AI technologies have evolved as the vendors make changes based on user feedback and as the large language models that power the technologies improve through additional training, warranting continued study.

The researchers analyzed survey data from pilot users of ambient documentation technologies at two large health systems. At Mass General Brigham, 873 physicians and advanced practice providers were given surveys before enrolling, then after 42 and 84 days. About 30 percent of users responded to the surveys at 42 days, and 22 percent at 84 days. All 557 Emory pilot users were surveyed before the pilots and then at 60 days of use, with an 11 percent response rate. Researchers analyzed the survey results quantifying different measures of burnout at Mass General Brigham and physician well-being at Emory Healthcare.
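
For a rough sense of scale, those response rates imply respondent counts on the order of the following (rounded here; the paper's exact tallies may differ slightly):

```python
# Approximate respondent counts implied by the quoted response rates.

mgb_pilot_users = 873
emory_pilot_users = 557

mgb_42d = round(mgb_pilot_users * 0.30)      # ~262 responses at 42 days
mgb_84d = round(mgb_pilot_users * 0.22)      # ~192 responses at 84 days
emory_60d = round(emory_pilot_users * 0.11)  # ~61 responses at 60 days

print(mgb_42d, mgb_84d, emory_60d)
```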

The study authors added that given that these were pilot users and there were limited survey response rates, the findings likely represent the experience of more enthusiastic users, and more research is needed to track clinical use of ambient documentation across a broader group of providers.

Mass General Brigham’s ambient documentation program launched in July 2023 as a proof-of-concept pilot study involving 18 physicians. By July 2024, the pilot, which tested two different ambient documentation technologies, expanded to more than 800 providers. As of April 2025, the technologies have been made available to all Mass General Brigham physicians, with more than 3,000 providers routinely using the tools. Later this year, the program will look to expand to other healthcare professionals such as nurses, physical and occupational therapists, and speech-language pathologists.  

“Ambient documentation technology offers a step forward in healthcare and new tools that may positively impact our clinical teams,” said Jacqueline You, lead study author and a digital clinical lead and primary care associate physician at Mass General Brigham. “While stories of providers being able to call more patients or go home and play with their kids without having to worry about notes are powerful, we feel the burnout data speak similar volumes of the promise of these technologies, and importance of continuing to study them.”

Ambient documentation’s use will continue to be studied with surveys and other measures tracking burnout rates and time spent on clinical notes inside and outside of working hours. Researchers will evaluate whether burnout rates improve over time as the AI evolves, or if these burnout gains plateau or are reversed.


This project received financial support from the Physician’s Foundation and the National Library of Medicine of the National Institutes of Health.

Study links rising temperatures and declining moods

Rising global temperatures affect human activity in many ways. Now, a new study illuminates an important dimension of the problem: Very hot days are associated with more negative moods, as shown by a large-scale look at social media postings.

Overall, the study examines 1.2 billion social media posts from 157 countries over the span of a year. The research finds that when the temperature rises above 95 degrees Fahrenheit, or 35 degrees Celsius, expressed sentiments become about 25 percent more negative in lower-income countries and about 8 percent more negative in better-off countries. Extreme heat affects people emotionally, not just physically.

“Our study reveals that rising temperatures don’t just threaten physical health or economic productivity — they also affect how people feel, every day, all over the world,” says Siqi Zheng, a professor in MIT’s Department of Urban Studies and Planning (DUSP) and Center for Real Estate (CRE), and co-author of a new paper detailing the results. “This work opens up a new frontier in understanding how climate stress is shaping human well-being at a planetary scale.”

The paper, “Unequal Impacts of Rising Temperatures on Global Human Sentiment,” is published today in the journal One Earth. The authors are Jianghao Wang, of the Chinese Academy of Sciences; Nicolas Guetta-Jeanrenaud SM ’22, a graduate of MIT’s Technology and Policy Program (TPP) and Institute for Data, Systems, and Society; Juan Palacios, a visiting assistant professor at MIT’s Sustainable Urbanization Lab (SUL) and an assistant professor at Maastricht University; Yichun Fan, of SUL and Duke University; Devika Kakkar, of Harvard University; Nick Obradovich, of SUL and the Laureate Institute for Brain Research in Tulsa; and Zheng, who is the STL Champion Professor of Urban and Real Estate Sustainability at CRE and DUSP. Zheng is also the faculty director of CRE and founded the Sustainable Urbanization Lab in 2019.

Social media as a window

To conduct the study, the researchers evaluated 1.2 billion posts from the social media platforms Twitter and Weibo, all of which appeared in 2019. They used a natural language processing technique called Bidirectional Encoder Representations from Transformers (BERT) to analyze posts in 65 languages across the 157 countries in the study.

Each social media post was given a sentiment rating from 0.0 (for very negative posts) to 1.0 (for very positive posts). The posts were then aggregated geographically into 2,988 locations and correlated with local weather. From this, the researchers could deduce the connection between extreme temperatures and expressed sentiment.
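
As an illustrative sketch of that aggregation step (not the authors' code, and with toy numbers standing in for the BERT-derived scores), the core comparison boils down to averaging sentiment per location on days above and below the heat threshold:

```python
# Toy post-level records; "sentiment" stands in for a BERT-derived score in [0, 1].

import pandas as pd

posts = pd.DataFrame({
    "location_id": [101, 101, 101, 205, 205, 205],
    "tmax_c":      [24.0, 36.5, 38.0, 22.0, 35.5, 37.0],
    "sentiment":   [0.71, 0.52, 0.48, 0.66, 0.58, 0.55],
})

posts["above_35c"] = posts["tmax_c"] > 35.0

# Mean sentiment per location on hot vs. cooler days; the hot-minus-cool gap is
# the raw quantity that the study's regression models formalize.
summary = posts.groupby(["location_id", "above_35c"])["sentiment"].mean().unstack()
summary["hot_minus_cool"] = summary[True] - summary[False]
print(summary)
```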

“Social media data provides us with an unprecedented window into human emotions across cultures and continents,” Wang says. “This approach allows us to measure emotional impacts of climate change at a scale that traditional surveys simply cannot achieve, giving us real-time insights into how temperature affects human sentiment worldwide.”

To compare the effects of temperature on sentiment in higher-income and middle-to-lower-income settings, the scholars used the World Bank cutoff of $13,845 in annual gross national income per capita, finding that in places below that threshold, the effects of heat on mood were triple those found in economically more robust settings.

“Thanks to the global coverage of our data, we find that people in low- and middle-income countries experience sentiment declines from extreme heat that are three times greater than those in high-income countries,” Fan says. “This underscores the importance of incorporating adaptation into future climate impact projections.”

In the long run

Using long-term global climate models, and expecting some adaptation to heat, the researchers also produced a long-range estimate of the effects of extreme temperatures on sentiment by the year 2100. Extending the current findings to that time frame, they project a 2.3 percent worsening of people’s emotional well-being based on high temperatures alone by then — although that is a far-range projection.

“It's clear now, with our present study adding to findings from prior studies, that weather alters sentiment on a global scale,” Obradovich says. “And as weather and climates change, helping individuals become more resilient to shocks to their emotional states will be an important component of overall societal adaptation.”

The researchers note that there are many nuances to the subject, and room for continued research in this area. For one thing, social media users are not likely to be a perfectly representative portion of the population, with young children and the elderly almost certainly using social media less than other people. However, as the researchers observe in the paper, the very young and elderly are probably particularly vulnerable to heat shocks, making the response to hot weather possibly even larger than their study can capture.

The research is part of the Global Sentiment project led by the MIT Sustainable Urbanization Lab, and the study’s dataset is publicly available. Zheng and other co-authors have previously investigated these dynamics using social media, although never before at this scale.

“We hope this resource helps researchers, policymakers, and communities better prepare for a warming world,” Zheng says.

The research was supported, in part, by Zheng’s chaired professorship research fund, and grants Wang received from the National Natural Science Foundation of China and the Chinese Academy of Sciences. 

© Image: MIT News; iStock

“It's clear now, with our present study adding to findings from prior studies, that weather alters sentiment on a global scale,” Nick Obradovich says.

Gone but not forgotten: brain’s map of the body remains unchanged after amputation

Emily Wheldon, tested before and after her arm amputation surgery

The findings, published today in Nature Neuroscience, have implications for the treatment of ‘phantom limb’ pain, but also suggest that controlling robotic replacement limbs via neural interfaces may be more straightforward than previously thought.

Studies have previously shown that within an area of the brain known as the somatosensory cortex there exists a map of the body, with different regions corresponding to different body parts. These maps are responsible for processing sensory information, such as touch, temperature and pain, as well as body position. For example, if you touch something hot with your hand, this will activate a particular region of the brain; if you stub your toe, a different region activates.

For decades, the commonly accepted view among neuroscientists has been that following amputation of a limb, neighbouring regions rearrange and essentially take over the area previously assigned to the now-missing limb. That view has relied on evidence from studies carried out only after amputation, without comparing activity in the brain maps beforehand.

But this has presented a conundrum. Most amputees report phantom sensations, a feeling that the limb is still in place – this can also lead to sensations such as itching or pain in the missing limb. Also, brain imaging studies where amputees have been asked to ‘move’ their missing fingers have shown brain patterns resembling those of able-bodied individuals.

To investigate this contradiction, a team led by Professor Tamar Makin from the University of Cambridge and Dr Hunter Schone from the University of Pittsburgh followed three individuals due to undergo amputation of one of their hands. This is the first time a study has looked at the hand and face maps of individuals both before and after amputation. Most of the work was carried out while Professor Makin and Dr Schone were at UCL.

Prior to amputation, all three individuals were able to move all five digits of their hands. While lying in a functional magnetic resonance imaging (fMRI) scanner – which measures activity in the brain – the participants were asked to move their individual fingers and to purse their lips. The researchers used the brain scans to construct maps of the hand and lips for each individual. In these maps, the lips sit near to the hand.

The participants repeated the activity three months and again six months after amputation, this time asked to purse their lips and to imagine moving individual fingers. One participant was scanned again 18 months after amputation and a second participant five years after amputation.

The researchers examined the signals from the pre-amputation finger maps and compared them against the maps post-amputation. Analysis of the ‘before’ and ‘after’ images revealed a remarkable consistency: even with their hand now missing, the corresponding brain region activated in an almost identical manner.
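
One simple way to picture that comparison (a hedged sketch, not the study's actual analysis pipeline) is to treat each finger's activation map as a vector of voxel values and correlate the pre-amputation map with the post-amputation one; a high correlation indicates a preserved map.

```python
# Toy "voxel" activation maps for one finger; in the real study these come
# from fMRI contrasts in somatosensory cortex. Values here are simulated.

import numpy as np

rng = np.random.default_rng(0)

pre_map = rng.normal(size=500)
post_map = pre_map + rng.normal(scale=0.3, size=500)   # largely preserved map

r = np.corrcoef(pre_map, post_map)[0, 1]
print(f"pre/post spatial correlation: r = {r:.2f}")    # high r -> stable map
```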

Professor Makin, from the Medical Research Council Cognition and Brain Science Unit at the University of Cambridge, the study’s senior author, said: “Because of our previous work, we suspected that the brain maps would be largely unchanged, but the extent to which the map of the missing limb remained intact was jaw-dropping.

“Bearing in mind that the somatosensory cortex is responsible for interpreting what’s going on within the body, it seems astonishing that it doesn’t seem to know that the hand is no longer there.”

As previous studies had suggested that the body map reorganises such that neighbouring regions take over, the researchers looked at the region corresponding to the lips to see if it had moved or spread. They found that it remained unchanged and had not taken over the region representing the missing hand.

The study’s first author, Dr Schone from the Department of Physical Medicine and Rehabilitation, University of Pittsburgh, said: “We didn’t see any signs of the reorganisation that is supposed to happen according to the classical way of thinking. The brain maps remained static and unchanged.”

To complement their findings, the researchers compared their case studies to 26 participants who had had upper limbs amputated, on average 23.5 years beforehand. These individuals showed similar brain representations of the hand and lips to those in their three case studies, suggesting long-term evidence for the stability of hand and lip representations despite amputation.

Brain activity maps for the hand (shown in red) and lips (blue) before and after amputation

The researchers offer an explanation for the previous misunderstanding of what happens within the brain following amputation. They say that the boundaries within the brain maps are not clear cut – while the brain does have a map of the body, each part of the map doesn’t support one body part exclusively. So while inputs from the middle finger may largely activate one region, they also show some activity in the region representing the forefinger, for example. Previous studies that argue for massive reorganisation determined the layout of the maps by applying a ‘winner takes all’ strategy – stimulating the remaining body parts and noting which area of the brain shows most activity; because the missing limb is no longer there to be stimulated, activity from neighbouring limbs has been misinterpreted as taking over.
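
A toy illustration of that pitfall, with made-up response values: if a region is labelled by whichever remaining body part drives its single strongest response, the label flips once the dominant input disappears, even though the underlying map is unchanged.

```python
# Each brain region responds mostly, but not exclusively, to one body part;
# labelling a region by its single strongest response hides that nuance.

# Responses of the former "hand" region to stimulation of remaining body parts
# (hypothetical values). Before amputation, stimulating the hand itself
# dominated, e.g. a response near 1.0.
hand_region_response = {"lips": 0.30, "forefinger_stump": 0.25, "shoulder": 0.10}

winner = max(hand_region_response, key=hand_region_response.get)
print(f"winner-takes-all label: {winner}")
# With the hand gone, the modest lip response "wins," so the region looks as
# if the lips have taken it over, even though its hand map is still intact.
```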

The findings have implications for the treatment of phantom limb pain, a phenomenon that can plague amputees. Current approaches focus on trying to restore representation of the limb in the brain’s map, but randomised controlled trials to test this approach have shown limited success – today’s study suggests this is because these approaches are focused on the wrong problem.

Dr Schone said: “The remaining parts of the nerves — still inside the residual limb — are no longer connected to their end-targets. They are dramatically cut off from the sensory receptors that have delivered them consistent signals. Without an end-target, the nerves can continue to grow to form a thickening of the nerve tissue and send noisy signals back to the brain.

“The most promising therapies involve rethinking how the amputation surgery is actually performed, for instance grafting the nerves into a new muscle or skin, so they have a new home to attach to.”

Of the three participants, one had substantial limb pain prior to amputation but received a complex procedure to graft the nerves to new muscle or skin; she no longer experiences pain. The other two participants, however, received the standard treatment and continue to experience phantom limb pain.

The University of Pittsburgh is one of a number of institutions that is researching whether movement and sensation can be restored to paralysed limbs or whether amputated limbs might be replaced by artificial, robotic limbs controlled by a brain interface. Today’s study suggests that because the brain maps are preserved, it should – in theory – be possible to restore movement to a paralysed limb or for the brain to control a prosthetic.

Dr Chris Baker from the Laboratory of Brain & Cognition, National Institute of Mental Health, said: “If the brain rewired itself after amputation, these technologies would fail. If the area that had been responsible for controlling your hand was now responsible for your face, these implants just wouldn’t work. Our findings provide a real opportunity to develop these technologies now.”

Dr Schone added: “Now that we’ve shown these maps are stable, brain-computer interface technologies can operate under the assumption that the body map remains consistent over time. This allows us to move into the next frontier: accessing finer details of the hand map — like distinguishing the tip of the finger from the base — and restoring the rich, qualitative aspects of sensation, such as texture, shape, and temperature. This study is a powerful reminder that even after limb loss, the brain holds onto the body, waiting for us to reconnect.”

The research was supported by Wellcome, the National Institute of Mental Health, National Institutes of Health and Medical Research Council.

Reference

Schone, HR et al. Stable Cortical Body Maps Before and After Arm Amputation. Nature Neuroscience; 21 Aug 2025; DOI: 10.1038/s41593-025-02037-7

The brain holds a ‘map’ of the body that remains unchanged even after a limb has been amputated, contrary to the prevailing view that it rearranges itself to compensate for the loss, according to new research from scientists in the UK and US.

Lincoln Laboratory reports on airborne threat mitigation for the NYC subway

A multiyear program at MIT Lincoln Laboratory to characterize how biological and chemical vapors and aerosols disperse through the New York City subway system is coming to a close. The program, part of the U.S. Department of Homeland Security (DHS) Science and Technology Directorate's Urban Area Security Initiative, builds on other efforts at Lincoln Laboratory to detect chemical and biological threats, validate air dispersion models, and improve emergency protocols in urban areas in case of an airborne attack. The results of this program will inform the New York Metropolitan Transportation Authority (MTA) on how best to install an efficient, cost-effective system for airborne threat detection and mitigation throughout the subway. On a broader scale, the study will help the national security community understand pragmatic chemical and biological defense options for mass transit, critical facilities, and special events.

Trina Vian from the laboratory's Counter–Weapons of Mass Destruction (WMD) Systems Group led this project, which she says had as much to do with air flow and sensors as it did with MTA protocols and NYC commuters. "There are real dangers associated with panic during an alarm. People can get hurt during mass evacuation, or lose trust in a system and the authorities that administer that system, if there are false alarms," she says. "A novel aspect of our project was to investigate effective low-regret response options, meaning those with little operational consequence to responding to a false alarm."

Currently, depending on the severity of the alarm, the MTA's response can include stopping service and evacuating passengers and employees.

A complex environment for testing

For the program, which started in 2019, Vian and her team collected data on how chemical and biological sensors performed in the subway, what factors affected sensor accuracy, and how different mitigation protocols fared in stopping an airborne threat from spreading and removing the threat from a contaminated location. For their tests, they released batches of a safe, custom-developed aerosol simulant within Grand Central Station that they could track with DNA barcodes. Each batch had a different barcode, which allowed the team to differentiate among them and quantitatively assess different combinations of mitigation strategies.
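
A minimal sketch of the bookkeeping such barcoded releases imply (the barcode strings, station names, and counts below are hypothetical): tally which release batch turned up at each sampling station so that different mitigation configurations can be compared quantitatively.

```python
# Attribute sampler reads to release batches via their DNA barcodes.

from collections import Counter, defaultdict

# Map each DNA barcode to the release batch / mitigation configuration it tags.
batch_of = {"ACGTAC": "batch1_air_curtain", "TTGCCA": "batch2_filtration"}

# (station, barcode) reads collected from samplers; all values are hypothetical.
reads = ([("42nd_St_platform", "ACGTAC")] * 120 +
         [("42nd_St_platform", "TTGCCA")] * 15 +
         [("adjacent_tunnel", "TTGCCA")] * 60)

counts = defaultdict(Counter)
for station, barcode in reads:
    counts[station][batch_of[barcode]] += 1

for station, tally in counts.items():
    print(station, dict(tally))
```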

To control and isolate air flow, the team tested static air curtains as well as air filtration systems. They also tested a spray knockdown system, developed by Sandia National Laboratories, designed to reduce and isolate particulate hazards in large-volume areas. The system sprays a fine water mist into the tunnels that attaches to threat particulates and uses gravity to rain out the threat material. The spray consists of droplets of a particular size and concentration, delivered with an applied electrostatic field. The original idea for the system was adapted from the coal mining industry, which used liquid sprayers to reduce the amount of inhalable soot.

The tests were done in a busy environment, and the team was required to complete trainings on MTA protocols such as track safety and how to interact with the public.

"We had long and sometimes very dirty days," says Jason Han of the Counter–WMD Systems Group, who collected measurements in the tunnels and analyzed the data. "We all wore bright orange contractor safety vests, which made people think we were official employees of the MTA. We would often get approached by people asking for directions!"

At times, issues such as power outages or database errors could disrupt data capture.

"We learned fairly early on that we had to capture daily data backups and keep a daily evolving master list of unique sensor identifiers and locations," says fellow team member Cassie Smith. "We developed workflows and wrote scripts to help automate the process, which ensured successful sensor data capture and attribution."

The team also worked closely with the MTA to make sure their tests and data capture ran smoothly. "The MTA was great at helping us maintain the test bed, doing as much as they could in our physical absence," Vian says.

Calling on industry

Another crucial aspect of the program was to connect with the greater chemical and biological industrial community to solicit their sensors for testing. These partnerships reduced the cost for DHS to bring new sensing technologies into the project, and, in return, participants gained a testing and data collection opportunity within the challenging NYC subway environment.

The team ultimately fielded 16 different sensors, each with varying degrees of maturity, that operated through a range of methods, such as ultraviolet laser–induced fluorescence, polymerase chain reaction, and long-wave infrared spectrometry.

"The partners appreciated the unique data they got and the opportunity to work with the MTA and experience an environment and customer base that they may not have anticipated before," Vian says.

The team finished testing in 2024 and has delivered the final report to the DHS. The MTA will use the report to help expand their PROTECT chemical detection system (originally developed by Argonne National Laboratory) from Grand Central Station into adjacent stations. They expect to complete this work in 2026.

"The value of this program cannot be overstated. This partnership with DHS and MIT Lincoln Laboratory has led to the identification of the best-suited systems for the MTA’s unique operating environment," says Michael Gemelli, director of chemical, biological, radiological, and nuclear/WMD detection and mitigation at the New York MTA.

"Other transit authorities can leverage these results to start building effective chemical and biological defense systems for their own specific spaces and threat priorities," adds Benjamin Ervin, leader of Lincoln Laboratory's Counter–WMD Systems Group. "Specific test and evaluation within the operational environment of interest, however, is always recommended to ensure defense system objectives are met."

Building these types of decision-making reports for airborne chemical and biological sensing has been a part of Lincoln Laboratory's mission since the mid-1990s. The laboratory also helped to define priorities in the field when DHS was forming in the early 2000s.

Beyond this study, Lincoln Laboratory is leading several other projects focused on forecasting the impact of novel chemical and biological threats within multiple domains — military, space, agriculture, health, etc. — and on prototyping rapid, autonomous, high-confidence biological identification capabilities for the homeland to provide actionable evidence of hazardous environments.

© Photo: Glen Cooper

Lincoln Laboratory staff member Kevin Geisel places sampling equipment near the 42nd Street Shuttle at Grand Central Station to test airborne threat–mitigation strategies.

Lincoln Laboratory reports on airborne threat mitigation for the NYC subway

A multiyear program at MIT Lincoln Laboratory to characterize how biological and chemical vapors and aerosols disperse through the New York City subway system is coming to a close. The program, part of the U.S. Department of Homeland Security (DHS) Science and Technology Directorate's Urban Area Security Initiative, builds on other efforts at Lincoln Laboratory to detect chemical and biological threats, validate air dispersion models, and improve emergency protocols in urban areas in case of an airborne attack. The results of this program will inform the New York Metropolitan Transportation Authority (MTA) on how best to install an efficient, cost-effective system for airborne threat detection and mitigation throughout the subway. On a broader scale, the study will help the national security community understand pragmatic chemical and biological defense options for mass transit, critical facilities, and special events.

Trina Vian from the laboratory's Counter–Weapons of Mass Destruction (WMD) Systems Group led this project, which she says had as much to do with air flow and sensors as it did with MTA protocols and NYC commuters. "There are real dangers associated with panic during an alarm. People can get hurt during mass evacuation, or lose trust in a system and the authorities that administer that system, if there are false alarms," she says. "A novel aspect of our project was to investigate effective low-regret response options, meaning those with little operational consequence to responding to a false alarm."

Currently, depending on the severity of the alarm, the MTA's response can include stopping service and evacuating passengers and employees.

A complex environment for testing

For the program, which started in 2019, Vian and her team collected data on how chemical and biological sensors performed in the subway, what factors affected sensor accuracy, and how different mitigation protocols fared in stopping an airborne threat from spreading and removing the threat from a contaminated location. For their tests, they released batches of a safe, custom-developed aerosol simulant within Grand Central Station that they could track with DNA barcodes. Each batch had a different barcode, which allowed the team to differentiate among them and quantitatively assess different combinations of mitigation strategies.

To control and isolate air flow, the team tested static air curtains as well as air filtration systems. They also tested a spray knockdown system developed by Sandia National Laboratories designed to reduce and isolate particulate hazards in large volume areas. The system sprays a fine water mist into the tunnels that attaches to threat particulates and uses gravity to rain out the threat material. The spray contains droplets of a particular size and concentration, and with an applied electrostatic field. The original idea for the system was adapted from the coal mining industry, which used liquid sprayers to reduce the amount of inhalable soot.

The tests were done in a busy environment, and the team was required to complete trainings on MTA protocols such as track safety and how to interact with the public.

"We had long and sometimes very dirty days," says Jason Han of the Counter–WMD Systems Group, who collected measurements in the tunnels and analyzed the data. "We all wore bright orange contractor safety vests, which made people think we were official employees of the MTA. We would often get approached by people asking for directions!"

At times, issues such as power outages or database errors could disrupt data capture.

"We learned fairly early on that we had to capture daily data backups and keep a daily evolving master list of unique sensor identifiers and locations," says fellow team member Cassie Smith. "We developed workflows and wrote scripts to help automate the process, which ensured successful sensor data capture and attribution."

The team also worked closely with the MTA to make sure their tests and data capture ran smoothly. "The MTA was great at helping us maintain the test bed, doing as much as they could in our physical absence," Vian says.

Calling on industry

Another crucial aspect of the program was to connect with the greater chemical and biological industrial community to solicit their sensors for testing. These partnerships reduced the cost for DHS to bring new sensing technologies into the project, and, in return, participants gained a testing and data collection opportunity within the challenging NYC subway environment.

The team ultimately fielded 16 different sensors, at varying levels of maturity, that operated through a range of methods, such as ultraviolet laser–induced fluorescence, polymerase chain reaction, and long-wave infrared spectrometry.

"The partners appreciated the unique data they got and the opportunity to work with the MTA and experience an environment and customer base that they may not have anticipated before," Vian says.

The team finished testing in 2024 and has delivered the final report to the DHS. The MTA will use the report to help expand their PROTECT chemical detection system (originally developed by Argonne National Laboratory) from Grand Central Station into adjacent stations. They expect to complete this work in 2026.

"The value of this program cannot be overstated. This partnership with DHS and MIT Lincoln Laboratory has led to the identification of the best-suited systems for the MTA’s unique operating environment," says Michael Gemelli, director of chemical, biological, radiological, and nuclear/WMD detection and mitigation at the New York MTA.

"Other transit authorities can leverage these results to start building effective chemical and biological defense systems for their own specific spaces and threat priorities," adds Benjamin Ervin, leader of Lincoln Laboratory's Counter–WMD Systems Group. "Specific test and evaluation within the operational environment of interest, however, is always recommended to ensure defense system objectives are met."

Building these types of decision-making reports for airborne chemical and biological sensing has been a part of Lincoln Laboratory's mission since the mid-1990s. The laboratory also helped to define priorities in the field when DHS was forming in the early 2000s.

Beyond this study, Lincoln Laboratory is leading several other projects focused on forecasting the impact of novel chemical and biological threats within multiple domains — military, space, agriculture, health, etc. — and on prototyping rapid, autonomous, high-confidence biological identification capabilities for the homeland to provide actionable evidence of hazardous environments.

© Photo: Glen Cooper

Lincoln Laboratory staff member Kevin Geisel places sampling equipment near the 42nd Street Shuttle at Grand Central Station to test airborne threat–mitigation strategies.

‘ASEAN Way’ offers hope amid rising global tensions

From the wars in Gaza and Ukraine to protracted crises in Africa, geopolitical tensions abound in today’s turbulent world. But there is a bright spot closer to home.

Guided by the basic principles of non-interference, non-aggression, decision-making through consensus, and quiet diplomacy, the Association of Southeast Asian Nations (ASEAN) has remained a shining example of hope in the region, said ASEAN Secretary-General Dr Kao Kim Hourn.

“History has taught us that peace is not a natural state of affairs – it is sustained by restraint, dialogue, diplomacy, and a shared commitment to order,” noted Dr Kao at a lecture organised by the NUS Centre for International Law (CIL) on 4 August 2025 that was attended by about 130 participants, including policymakers, diplomats, government officials as well as members of academia and the private sector from Singapore and the region.

“ASEAN’s journey stands as a profound testament to the tenacity required to make regionalism and multilateralism work.”

ASEAN is an area of focus for CIL, particularly in its role as a pillar of international law, said CIL Director Dr Nilufer Oral. In her welcome remarks at the annual CIL-NUS ASEAN Distinguished Lecture, she hailed ASEAN as “an important building block in the international legal system”.

Echoing this, Dr Kao added: “Even in times of difficulty, we do not abandon our faith in the power of diplomacy, a culture of dialogue, and a sense of shared purpose. That is the ‘ASEAN Way’, and it remains a source of light in an increasingly uncertain and polarised world.”

His lecture, titled “ASEAN: A Bright Spot in a Darkening World”, focused on the continued relevance of the bloc, which was founded in 1967. But the brightness can dim occasionally.

“The recent flare-up along the Cambodia-Thailand border should serve as a sobering wake-up call…we can neither afford complacency, nor take peace for granted,” stressed the Cambodian diplomat, who has been ASEAN’s Secretary-General since 2023.

Clashes between the two countries in July de-escalated after an intervention by neighbouring leader Mr Anwar Ibrahim, Prime Minister of Malaysia, who is the current ASEAN Chair.

Unity amid uncertainty

Dr Kao emphasised ASEAN’s commitment to future-proofing the region, as set out in the “ASEAN 2045” agenda adopted at the 46th ASEAN Summit in May 2025.

“For the first time in ASEAN’s history, we have articulated a 20-year outlook that anticipates the global megatrends already reshaping the international system,” he said, highlighting the challenges of climate change, demographic shifts, and technological disruptions.

“‘ASEAN 2045’ seeks not just to respond to them, but also to harness them in shaping a dynamic, inclusive, and future-ready region.”

Other challenges threatening the rules-based world order include the rising tides of unilateralism, fragmentation, and protectionism; Dr Kao called for concerted efforts and vigilance to sustain trust in that order.

Such efforts are especially crucial given the outsized impact of external forces on the region.

For example, Dr Kao acknowledged that US President Donald Trump’s support was crucial to securing peace in the Indo-Pacific. But he said ASEAN had to make known to the US that its recent tariffs had caused “a lot of uncertainty” in the region, especially in the private sector.

“ASEAN has been trying to respond collectively as a region to the US,” said Dr Kao in his reply to a question on the rules-based order, adding that the bloc is also working to boost trade within the region and with partners like China, Korea, and India.

A beacon in a darkening world

Amid geopolitical tensions, ASEAN’s growing appeal is evident as more non-Southeast Asian countries join ASEAN-led partnerships. It has not only championed peace, but also “reinforced its position as a cornerstone of regional economic success”, said Dr Kao.

Currently the world’s fifth-largest economy, ASEAN is projected to become the fourth-largest by 2030. In 2023, it secured a record US$230 billion in foreign direct investment.

Asked if ASEAN was in need of reform, Dr Kao conceded the bloc had its “shortcomings” such as not delivering results quickly enough, and was working to address them. On a separate note, he highlighted that ASEAN was making inroads in AI-related issues ranging from governance to ethics, and was gaining “a lot of momentum” in cybersecurity.

Weighing in, Mr Ong Keng Yong, Singapore’s Ambassador-at-Large and current CIL Governing Board Member, who moderated the dialogue, added that countries needed more political will to enforce cybersecurity-related laws.

Participants believed the topics discussed were timely. “The lecture was an accurate reflection of the processes and the recent priorities of ASEAN,” said Ms Diane Shayne D. Lipana, Acting Director of ASEAN Affairs at the Philippines’ Department of Foreign Affairs.

In these uncertain times, ASEAN’s role is more vital than ever. “We will redouble our efforts to ensure that ASEAN remains – steadily and resolutely – a bright beacon in a darkening world,” Dr Kao added.

Learning from punishment

From toddlers’ timeouts to criminals’ prison sentences, punishment reinforces social norms, making it known that an offender has done something unacceptable. At least, that is usually the intent — but the strategy can backfire. When a punishment is perceived as too harsh, observers can be left with the impression that an authority figure is motivated by something other than justice.

It can be hard to predict what people will take away from a particular punishment, because everyone makes their own inferences not just about the acceptability of the act that led to the punishment, but also the legitimacy of the authority who imposed it. A new computational model developed by scientists at MIT’s McGovern Institute for Brain Research makes sense of these complicated cognitive processes, recreating the ways people learn from punishment and revealing how their reasoning is shaped by their prior beliefs.

Their work, reported Aug. 4 in the journal PNAS, explains how a single punishment can send different messages to different people, and even strengthen the opposing viewpoints of groups who hold different opinions about authorities or social norms.

“The key intuition in this model is the fact that you have to be evaluating simultaneously both the norm to be learned and the authority who’s punishing,” says McGovern investigator and John W. Jarve Professor of Brain and Cognitive Sciences Rebecca Saxe, who led the research. “One really important consequence of that is even where nobody disagrees about the facts — everybody knows what action happened, who punished it, and what they did to punish it — different observers of the same situation could come to different conclusions.”

For example, she says, a child who is sent to timeout after biting a sibling might interpret the event differently than the parent. One might see the punishment as proportional and important, teaching the child not to bite. But if the biting, to the toddler, seemed a reasonable tactic in the midst of a squabble, the punishment might be seen as unfair, and the lesson will be lost.

People draw on their own knowledge and opinions when they evaluate these situations — but to study how the brain interprets punishment, Saxe and graduate student Setayesh Radkani wanted to take those personal ideas out of the equation. They needed a clear understanding of the beliefs that people held when they observed a punishment, so they could learn how different kinds of information altered their perceptions. So Radkani set up scenarios in imaginary villages where authorities punished individuals for actions that had no obvious analog in the real world.

Participants observed these scenarios in a series of experiments, with different information offered in each one. In some cases, for example, participants were told that the person being punished was either an ally or a competitor of the authority, whereas in other cases, the authority’s possible bias was left ambiguous.

“That gives us a really controlled setup to vary prior beliefs,” Radkani explains. “We could ask what people learn from observing punitive decisions with different severities, in response to acts that vary in their level of wrongness, by authorities that vary in their level of different motives.”

For each scenario, participants were asked to evaluate four factors: how much the authority figure cared about justice; the selfishness of the authority; the authority’s bias for or against the individual being punished; and the wrongness of the punished act. The research team asked these questions when participants were first introduced to the hypothetical society, then tracked how their responses changed after they observed the punishment. Across the scenarios, participants’ initial beliefs about the authority and the wrongness of the act shaped the extent to which those beliefs shifted after they observed the punishment.

Radkani was able to replicate these nuanced interpretations using a cognitive model framed around an idea that Saxe’s team has long used to think about how people interpret the actions of others. That is, to make inferences about others’ intentions and beliefs, we assume that people choose actions that they expect will help them achieve their goals.

To apply that concept to the punishment scenarios, Radkani developed a model that evaluates the meaning of a punishment (an action aimed at achieving a goal of the authority) by considering the harm associated with that punishment; its costs or benefits to the authority; and its proportionality to the violation. By assessing these factors, along with prior beliefs about the authority and the punished act, the model was able to predict people’s responses to the hypothetical punishment scenarios, supporting the idea that people use a similar mental model. “You need to have them consider those things, or you can’t make sense of how people understand punishment when they observe it,” Saxe says.
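
As a loose illustration of that inverse-planning idea (a minimal, hypothetical sketch, not the model reported in PNAS), the code below jointly updates an observer's beliefs about how just an authority is and how wrong an act was after seeing a single punishment, assuming a just authority chooses severity in proportion to perceived wrongness. Two observers who see the same punishment but start from different priors about the authority end up with different conclusions, which is the paper's central point.

```python
# Minimal Bayesian sketch (assumptions, not the published model): observers infer the
# authority's justice motive and the act's wrongness from one observed punishment severity.
import numpy as np

justice_levels = np.linspace(0.0, 1.0, 21)    # how much the authority cares about justice
wrongness_levels = np.linspace(0.0, 1.0, 21)  # how wrong the punished act was

def likelihood(severity, justice, wrongness, bias=0.0, noise=0.15):
    """P(observed severity | hypothesis): a just authority punishes in proportion to
    wrongness; an unjust one drifts toward its bias. Gaussian noise covers the rest."""
    expected = justice * wrongness + (1.0 - justice) * bias
    return np.exp(-((severity - expected) ** 2) / (2.0 * noise ** 2))

def update(prior, severity, bias=0.0):
    """One Bayesian update of the joint belief over (justice, wrongness)."""
    like = np.array([[likelihood(severity, j, w, bias) for w in wrongness_levels]
                     for j in justice_levels])
    posterior = prior * like
    return posterior / posterior.sum()

def expectation(posterior, values, axis):
    """Expected value along one dimension of the joint posterior."""
    return float((posterior.sum(axis=axis) * values).sum())

# Two observers watch the same harsh punishment (severity 0.9 on a 0-1 scale)
# but hold different prior beliefs about the authority.
trusting_prior = np.outer(justice_levels, np.ones(21))         # leans toward a just authority
skeptical_prior = np.outer(1.0 - justice_levels, np.ones(21))  # leans toward an unjust one

for label, prior in [("trusting", trusting_prior), ("skeptical", skeptical_prior)]:
    post = update(prior / prior.sum(), severity=0.9)
    print(f"{label} observer: E[wrongness] = {expectation(post, wrongness_levels, axis=0):.2f}, "
          f"E[justice] = {expectation(post, justice_levels, axis=1):.2f}")
```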

Even though the team designed their experiments to preclude preconceived ideas about the people and actions in their imaginary villages, not everyone drew the same conclusions from the punishments they observed. Saxe’s group found that participants’ general attitudes toward authority influenced their interpretation of events. Those with more authoritarian attitudes — assessed through a standard survey — tended to judge punished acts as more wrong and authorities as more motivated by justice than other observers.

“If we differ from other people, there’s a knee-jerk tendency to say, ‘either they have different evidence from us, or they’re crazy,’” Saxe says. Instead, she says, “It’s part of the way humans think about each other’s actions.”

“When a group of people who start out with different prior beliefs get shared evidence, they will not end up necessarily with shared beliefs. That’s true even if everybody is behaving rationally,” says Saxe.

This way of thinking also means that the same action can simultaneously strengthen opposing viewpoints. The Saxe lab’s modeling and experiments showed that when those viewpoints shape individuals’ interpretations of future punishments, the groups’ opinions will continue to diverge. For instance, a punishment that seems too harsh to a group who suspects an authority is biased can make that group even more skeptical of the authority’s future actions. Meanwhile, people who see the same punishment as fair and the authority as just will be more likely to conclude that the authority figure’s future actions are also just. 

“You will get a vicious cycle of polarization, staying and actually spreading to new things,” says Radkani.

The researchers say their findings point toward strategies for communicating social norms through punishment. “It is exactly sensible in our model to do everything you can to make your action look like it’s coming out of a place of care for the long-term outcome of this individual, and that it’s proportional to the norm violation they did,” Saxe says. “That is your best shot at getting a punishment interpreted pedagogically, rather than as evidence that you’re a bully.”

Nevertheless, she says that won’t always be enough. “If the beliefs are strong the other way, it’s very hard to punish and still sustain a belief that you were motivated by justice.”

Joining Saxe and Radkani on the paper is Joshua Tenenbaum, MIT professor of brain and cognitive sciences. The study was funded, in part, by the Patrick J McGovern Foundation.

© Photo: iStock

McGovern Institute researchers show that the same punishment can either build respect for authority or deepen distrust — depending on what people already believe.

A boost for the precision of genome editing

The U.S. Food and Drug Administration’s recent approval of the first CRISPR-Cas9–based gene therapy has marked a major milestone in biomedicine, validating genome editing as a promising treatment strategy for disorders like sickle cell disease, muscular dystrophy, and certain cancers.

CRISPR-Cas9, often likened to “molecular scissors,” allows scientists to cut DNA at targeted sites to snip, repair, or replace genes. But despite its power, Cas9 poses a critical safety risk: The active enzyme can linger in cells and cause unintended DNA breaks — so-called off-target effects — which may trigger harmful mutations in healthy genes.

Now, researchers in the labs of Ronald T. Raines, MIT professor of chemistry, and Amit Choudhary, professor of medicine at Harvard Medical School, have engineered a precise way to turn Cas9 off after its job is done — significantly reducing off-target effects and improving the clinical safety of gene editing. Their findings are detailed in a new paper published in the Proceedings of the National Academy of Sciences (PNAS).

“To ‘turn off’ Cas9 after it achieves its intended genome-editing outcome, we developed the first cell-permeable anti-CRISPR protein system,” says Raines, the Roger and Georges Firmenich Professor of Natural Products Chemistry. “Our technology reduces the off-target activity of Cas9 and increases its genome-editing specificity and clinical utility.”

The new tool — called LFN-Acr/PA — uses a protein-based delivery system to ferry anti-CRISPR proteins into human cells rapidly and efficiently. While natural Type II anti-CRISPR proteins (Acrs) are known to inhibit Cas9, their use in therapy has been limited because they’re often too bulky or charged to enter cells, and conventional delivery methods are too slow or ineffective.

LFN-Acr/PA overcomes these hurdles using a component derived from anthrax toxin to introduce Acrs into cells within minutes. Even at picomolar concentrations, the system shuts down Cas9 activity with remarkable speed and precision — boosting genome-editing specificity up to 40 percent.

Bradley L. Pentelute, MIT professor of chemistry, is an expert on the anthrax delivery system, and is also an author of the paper.

The implications of this advance are wide-ranging. With patent applications filed, LFN-Acr/PA represents a faster, safer, and more controllable means of harnessing CRISPR-Cas9, opening the door to more-refined gene therapies with fewer unintended consequences.

The research was supported by the National Institutes of Health and a Gilliam Fellowship from the Howard Hughes Medical Institute awarded to lead author Axel O. Vera, a graduate student in the Department of Chemistry.

© Image: Ernesto del Aguila III/National Human Genome Research Institute

A new tool developed by MIT researchers represents a faster, safer, and more controllable means of harnessing CRISPR-Cas9, opening the door to more-refined gene therapies with fewer unintended consequences.

Materials Research Laboratory: Driving interdisciplinary materials research at MIT

Materials research thrives across MIT, spanning disciplines and departments. Recent breakthroughs include strategies for securing sustainable supplies of nickel — critical to clean-energy technologies (Department of Materials Science and Engineering); the discovery of unexpected magnetism in atomically thin quantum materials (Department of Physics); and the development of adhesive coatings that reduce scarring around medical implants (departments of Mechanical Engineering and Civil and Environmental Engineering).

At the center of these efforts is the Materials Research Laboratory (MRL), a hub that connects and supports the Institute’s materials research community. “MRL serves as a home for the entire materials research community at MIT,” says C. Cem Tasan, who became director in April 2025. “Our goal is to make it easier for our faculty to conduct their extraordinary research,” adds Tasan, the POSCO Associate Professor of Metallurgy in the Department of Materials Science and Engineering.

A storied history

Established in 2017, the MRL brings together more than 30 researchers and builds on a 48-year legacy of innovation. It was formed through the merger of the MIT Materials Processing Center (MPC) and the Center for Materials Science and Engineering (CMSE), two institutions that helped lay the foundation for MIT’s global leadership in materials science.

Over the years, research supported by MPC and CMSE has led to transformative technologies and successful spinout companies. Notable examples include AMSC, based on advances in superconductivity; OmniGuide, which developed cutting-edge optical fiber technologies; and QD Vision, a pioneer in quantum dot technology acquired by Samsung in 2016. Another landmark achievement was the development of the first germanium laser to operate at room temperature — a breakthrough now used in optical communications.

Enabling research through partnership and support

MRL is launching targeted initiatives to connect MIT researchers with industry partners around specific technical challenges. Each initiative will be led by a junior faculty member working closely with MRL to identify a problem that aligns with their research expertise and is relevant to industry needs.

Through multi-year collaborations with participating companies, faculty can explore early-stage solutions in partnership with postdocs or graduate students. These initiatives are designed to be agile and interdisciplinary, with the potential to grow into major, long-term research programs.

Behind-the-scenes support, front-line impact

MRL provides critical infrastructure that enables faculty to focus on discovery, not logistics. “MRL works silently in the background, where every problem a principal investigator has related to the administration of materials research is solved with efficiency, good organization, and minimum effort,” says Tasan.

This quiet but powerful support spans multiple areas:

  • The finance team manages grants and helps secure new funding opportunities.
  • The human resources team supports the hiring of postdocs.
  • The communications team amplifies the lab’s impact through compelling stories shared with the public and funding agencies.
  • The events team plans and coordinates conferences, seminars, and symposia that foster collaboration within the MIT community and with external partners.

Together, these functions ensure that research at MRL runs smoothly and effectively — from initial idea to lasting innovation.

Leadership with a vision

Tasan, who also leads a research group focused on metallurgy, says he took on the directorship because “I thrive on new challenges.” He also saw the role as an opportunity to contribute more broadly to MIT. 

“I believe MRL can play an even greater role in advancing materials research across the Institute, and I’m excited to help make that happen,” he says.

© Photo: Gretchen Ertl

MIT’s Great Dome rises against the Boston skyline behind the Vannevar Bush Building. Also known as Building 13, the structure houses the offices and many of the facilities of MIT’s Materials Research Laboratory.

How to reverse nation’s declining birth rate

Doctor holding newborn.

Getty Images

Health

How to reverse nation’s declining birth rate

Health experts urge policies that buoy families: lower living costs, affordable childcare, help for older parents who want more kids

Alvin Powell

Harvard Staff Writer

5 min read

Financial-incentive programs for prospective parents don’t work as a way to reverse falling birth rates, Harvard health experts said on Tuesday, weighing in on a policy option that has been in the news in recent months.

Instead, they said, a more effective approach would be to target the issues that make parenting difficult: the high cost of living, the lack of affordable childcare, and the limited options for older parents who still want to see their families grow.

The discussion, held at The Studio at Harvard T.H. Chan School of Public Health, came in the wake of a July report from the Centers for Disease Control and Prevention that showed that the U.S. fertility rate was down 22 percent since the last peak in 2007.

Ana Langer, professor of the practice of public health, emerita, said the causes of fertility decline are numerous, complex, and difficult to reverse.

Surveys investigating why people might not want children cite things such as the cost of living, negative medical experiences from previous pregnancies, and wariness about major global issues such as climate change. In fact, she said, many survey respondents are surprised that declining fertility is even a problem and say they’re more concerned about overpopulation and its impacts on the planet.

The landscape is complicated by the fact that U.S. society has changed significantly since the 1960s, when expectations were that virtually everyone wanted to raise a family. Today, she said, people feel free to focus on careers rather than families, and there is far greater acceptance of those who decide never to have children.

Margaret Anne McConnell, the Chan School’s Bruce A. Beal, Robert L. Beal and Alexander S. Beal Professor of Global Health Economics, said some of the factors that have contributed to the declining birth rate reflect positive cultural shifts.

Fertility rates are falling fastest, for example, in the youngest demographic, girls ages 15 to 20. Teen pregnancy has long been considered a societal ill and is associated with difficult pregnancies, poor infant health, interrupted education, and poor job prospects.

Other factors include the widespread availability of birth control, which gives women more reproductive choice, as well as the increasing share of women in higher education and the workforce.

McConnell said some stop short of having the number of children they desire, due to fertility, medical, and other issues. One way to address declining fertility, she said, would be to find ways to enable those parents to have the number of children they wish.

“Any time we see people being able to make fertility choices that suit their family, I think that’s a success,” McConnell said. “I think people choosing to have children later in life is also a success. … To the extent that we can make it possible for people to reach whatever their desired family size is, I think that that would be a societal priority.”

The event, “America’s declining birth rate: A public health perspective,” brought together Langer, McConnell, and Henning Tiemeier, the Chan School’s Sumner and Esther Feldberg Professor of Maternal and Child Health.

Addressing the declining birth rate has become a focus of the current administration — President Trump has floated the idea of a $5,000 “baby bonus” and $1,000 “Trump Accounts” that were part of the “One Big Beautiful Bill” approved this summer.

Panelists at the virtual event pointed out that a declining birth rate is not just a problem in the U.S. It has been declining in many countries around the world, and for many of the same reasons. As people — particularly women — become better educated and wealthier, they tend to choose smaller families than their parents and grandparents.

Tiemeier said that changing societies and cultures have altered the very nature of relationships between men and women. He added sex education to the list of key changes that have fueled the birth-rate decline, particularly for teen pregnancies. The question of whether declining fertility is a problem is too simple for such a complex issue, he said.

In a country with a growing population, where women have, on average, three children, the birth rate falling to 2.5, slightly over the replacement value, would be beneficial economically, ensuring more workers to support the population as it ages.

Countries with a birth rate below 1, whose population is already contracting, risk too few workers to fuel their economy, not to mention the social and societal impacts of a lack of young people.

Tiemeier and McConnell said that other countries have tried simply paying people to have more children, and it doesn’t work. Even if the declining birth rate were considered a catastrophe, McConnell said, governments haven’t yet found levers that can bring it back up.

That doesn’t mean there aren’t things government can do to help parents navigate a difficult and expensive time in life. Programs to lower the cost of childcare have been instituted in some cities and states, and more can be done.

Tiemeier said both Republicans and Democrats are interested in supporting families, though their approaches may be different. So this may be a rare issue on which they could find common ground.

Other areas of associated need include maternal health — a significant part of the population lives in healthcare “deserts” far from medical care. Programs designed to reach those areas, as well as a national parental-leave policy, would help young families navigate that time.

“Any measure that we take will have a modest effect, because there are so many things contributing to this,” Tiemeier said. “To say that we are waiting and looking for a measure that has a big effect is an illusion. There are no big effects in this discussion.”

Dr. Robot will see you now?

Health

Dr. Robot will see you now?

Pierre E. Dupont.

Pierre E. Dupont holds a transcatheter valve repair device with a motorized catheter drive system, replacing the traditional manual handle.

Niles Singer/Harvard Staff Photographer

Alvin Powell

Harvard Staff Writer

8 min read

Medical robotics expert says coming autonomous devices will augment skills of clinicians (not replace them), extend reach of cutting-edge procedures

The robot doctor will see you now? Not for the foreseeable future, anyway.

Medical robots today are pretty dumb, typically acting as extensions of a surgeon’s hands rather than taking over for them. Pierre E. Dupont, professor of surgery at Harvard Medical School, co-authored a Viewpoint article in the journal Science Robotics last month saying that autonomous surgical robots that learn as they go are on the way.

But their likely impact will be to augment the skills of clinicians, not replace them, and to extend the reach of cutting-edge advances beyond the urban campuses of academic medical centers where they typically emerge.

In this edited conversation, Dupont, who is also chief of pediatric cardiac bioengineering at Boston Children’s Hospital, spoke with the Gazette about the areas most likely to see surgical robots operating autonomously, and some of the hurdles to their adoption.


You note that robot autonomy and learning system technologies are being used in manufacturing as well as medical settings. How does that work?

Yes, in just about every other field, robots are used as autonomous agents to replace the manpower that would be needed to perform a task. But in many surgical applications, like laparoscopy, they’re used as extensions of the clinician’s hand. They improve ergonomics for the clinician, but there’s still some question as to how much they’re improving the experience for the patient.

Outside of medicine, teleoperation, in which the operator uses a mechanical input device to directly control robot motion, is only used in remote or hostile environments like space or the ocean floor. But it’s how laparoscopic robots are controlled.

The hot extension today, which ties into hospital economics, is telesurgery, where you might have a Boston-based hospital and satellite facilities in the suburbs. Rather than the clinician being with the patient in the operating room, you would have robots at the satellite hospitals, and the clinician could stay at the main hospital and connect remotely to perform procedures. That’s trending today, but it’s not automation.

What would an automated procedure look like?

Some simpler medical procedures are already automated using non-learning methods.

In joint replacement, for example, you need to create a cavity in the bone to place an implant. Historically, the skill of the clinician determined how well the implant fit and whether the joint alignment was appropriate.

But there’s a strong parallel with machining processes, which was the impetus for creating robots to mill cavities in the bone — leading to more accurate and consistent outcomes. That’s a big market today in orthopedics.

The autonomy of the milling robot is possible because it’s a well-defined problem and easy to model. You create a 3D model of the bones and a clinician can sit at a computer interface and use software to define exactly how the implant will be aligned and how much bone will be removed. So everything can be modeled and preplanned — the robot is basically just following the plan. It’s a dumb form of automation.
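
To make that plan-then-execute distinction concrete, here is a deliberately simplistic, hypothetical sketch (not any vendor's actual planning software): the bone is a voxel grid, the plan marks the voxels to remove for the implant cavity, and "execution" just visits those voxels in order with no decisions made along the way.

```python
# Hypothetical sketch of non-learning, preplanned automation: the plan fully specifies
# the cavity, and the executing robot only follows it. All dimensions are invented.
import numpy as np

bone = np.ones((20, 20, 20), dtype=bool)   # True = bone present (toy 1 mm voxel model)

# Planning step: clinician-approved software marks the cavity voxels for the implant.
plan = np.zeros_like(bone)
plan[8:12, 8:12, 0:10] = True              # a simple rectangular cavity

# Execution step: visit each planned voxel and mill it away, in a fixed raster order.
for x, y, z in np.argwhere(plan):
    bone[x, y, z] = False                  # remove bone at this waypoint

milled = int(plan.sum())
leftover = int((bone & plan).sum())        # planned voxels still present; should be 0
print(f"milled {milled} voxels; deviation from plan: {leftover} voxels")
```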

That’s because of the nature of the bone and the implant. The dimensions are known. Nothing’s moving like it would if you were operating on a beating heart.

That’s right, although I think transcatheter cardiac procedures and endovascular procedures in general are actually great targets for automation.

The geometry is not as well-defined as orthopedic surgery, but it’s much simpler than in laparoscopy or any type of open surgery where you’re dealing with soft tissue.

In soft tissue surgery, you’re using forceps, scalpel, and suture to grasp, cut, and sew tissue. The clinician, through experience, has a model in their head of how hard they can squeeze the tissue without damaging it, how the tissue will deform when they pull on it and cut it, and how deeply they have to place the needle while suturing.

Those things are much harder to model with classical engineering techniques than milling bone.

How much of the progress in this area is due to the speed of technological development versus acceptance among clinicians and patients?

If you just think about robotics, the amount of acceptance is surprising. A lot of academic clinicians love to play with new toys. Many patients, perhaps incorrectly, assume that the clinician must do a better job with this incredible piece of equipment.

Hospitals want to know about costs. They don’t necessarily care if the clinician’s back is a little less sore at the end of the day because they used a robot. They want to know whether the patient had fewer complications and was discharged sooner — in other words, better care for less money. That’s the tough aspect of this: Robots cost more to make and roll out than most other medical equipment.

When you talk about the acceptance of medical robot automation, clinicians may be a little reluctant because they wonder whether they are going to lose their jobs. But it’s actually like giving them a highly effective tool that can raise their skill level.

There are a lot of clinicians who may only see a particular procedure 10 times a year. If you think of anything that’s complex in life that you only do once a month, you’re not going to do that as well and feel as confident as if you did it every day.

So if the robot is not replacing them, but acting like a highly experienced colleague whom you can communicate with, and who can coach you through the procedure by explaining, “Now I’m going to do this,” or asking, “Do you think I should do it this way?” or “Should I put this device a little to the left?” then I think there’ll be acceptance. If you have a system that can bend a clinician’s learning curve down and raise their proficiency level very quickly, every clinician will want one.

How important are recent advances in large language models and other forms of AI in the discussion of autonomy?

These advances are what is going to enable progress in medical robot autonomy. We’re working on transcatheter valve repair procedures that right now are done by hand. Clinicians need to do a lot of these procedures to get good at them — and to stay that way.

We have seen in my lab that adding robotic teleoperation makes them easier. But if we can add learning-based autonomous functionality, we could make it possible for these procedures to be safely offered in low-volume facilities.

That’s important because a significant concern is that you get the best care and the newest treatments in the big urban areas that have academic medical centers. But many people don’t live in those areas and even though they could travel to get treatment, they want to get treated locally.

So, if you can enable community hospitals to offer these services, even though they’re low-volume, that’s an opportunity for a much larger fraction of the population to take advantage of the best medical care.

When we look further out, do you have any doubt that medicine will become more autonomous?

I think there’s a lot of opportunity for increasing levels of autonomy, but it has to be done gradually. You want to make sure that you’re regulating it so that patients are always safe.

There will be unanticipated events, such as unusual anatomical variations, that the system hasn’t been trained for. You need to make sure that the system will catch these problems as they come up — it needs to recognize when it’s out of its depth.

Currently, that’s a research topic in learning systems — there is technology that still needs to be developed. But the revolution over the last few years in foundation models has shown us how much is possible.

Ultimately, will there be a case where there’s no clinician involved? We don’t have to worry about that question yet.

You mentioned that these systems are expensive. Will costs come down the more they’re used?

The challenge is that medical devices are designed and approved for specific procedures. If you want to create a new medical device, you need to look at how many procedures are performed per year, and what the reimbursements are for those procedures.

For any medical device — not a robot — the smallest realistic market size is $100 million in sales per year. And if you want to raise venture capital funding, the market has to be at least a billion dollars.

Since medical robots are so expensive to develop, that means you should have a multibillion-dollar market for a medical robot. Those markets do exist: Laparoscopy and orthopedics are current examples. Endovascular procedures including heart valve repair and replacement are another that I am targeting.

An important factor for each of these three examples is that the robot is a platform. It can be used for a variety of procedures and so has a much larger addressable market than a robot that can only do one thing.

New laser “comb” can enable rapid identification of chemicals with extreme precision

Optical frequency combs are specially designed lasers that act like rulers to accurately and rapidly measure specific frequencies of light. They can be used to detect and identify chemicals and pollutants with extremely high precision.

Frequency combs would be ideal for remote sensors or portable spectrometers because they can enable accurate, real-time monitoring of multiple chemicals without complex moving parts or external equipment.

But developing frequency combs with high enough bandwidth for these applications has been a challenge. Often, researchers must add bulky components that limit scalability and performance.

Now, a team of MIT researchers has demonstrated a compact, fully integrated device that uses a carefully crafted mirror to generate a stable frequency comb with very broad bandwidth. The mirror they developed, along with an on-chip measurement platform, offers the scalability and flexibility needed for mass-producible remote sensors and portable spectrometers. This development could enable more accurate environmental monitors that can identify multiple harmful chemicals from trace gases in the atmosphere.

“The broader the bandwidth a spectrometer has, the more powerful it is, but dispersion is in the way. Here we took the hardest problem that limits bandwidth and made it the centerpiece of our study, addressing every step to ensure robust frequency comb operation,” says Qing Hu, Distinguished Professor in Electrical Engineering and Computer Science at MIT, principal investigator in the Research Laboratory of Electronics, and senior author on an open-access paper describing the work.

He is joined on the paper by lead author Tianyi Zeng PhD ’23; as well as Yamac Dikmelik of General Dynamics Mission Systems; Feng Xie and Kevin Lascola of Thorlabs Quantum Electronics; and David Burghoff SM ’09, PhD ’14, an assistant professor at the University of Texas at Austin. The research appears today in Light: Science and Applications.

Broadband combs

An optical frequency comb produces a spectrum of equally spaced laser lines, which resemble the teeth of a comb.

Scientists can generate frequency combs using several types of lasers for different wavelengths. By using a laser that produces long wave infrared radiation, such as a quantum cascade laser, they can use frequency combs for high-resolution sensing and spectroscopy.

In dual-comb spectroscopy (DCS), the beam of one frequency comb travels straight through the system and strikes a detector at the other end. The beam of the second frequency comb passes through a chemical sample before striking the same detector. Using the results from both combs, scientists can faithfully replicate the chemical features of the sample at much lower frequencies, where signals can be easily analyzed.
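
To make that down-conversion step concrete, the short sketch below works through the arithmetic with invented numbers: two combs with slightly different repetition rates produce beat notes at radio frequencies, so each optical comb line maps to a much lower frequency that ordinary electronics can record. The repetition rates, offset, and detuning here are illustrative assumptions, not values from the MIT device.

```python
# Illustrative dual-comb arithmetic with made-up numbers (not the paper's values).
f_rep_1 = 10.000e9       # repetition rate of comb 1, in Hz
f_rep_2 = 10.001e9       # comb 2 is detuned by delta = 1 MHz
f_offset = 30.0e12       # shared optical offset near the long-wave infrared, in Hz
delta = f_rep_2 - f_rep_1

for n in range(1, 4):
    line_1 = f_offset + n * f_rep_1        # n-th optical line of comb 1
    line_2 = f_offset + n * f_rep_2        # corresponding line of comb 2
    beat = abs(line_2 - line_1)            # heterodyne beat measured at the detector
    # every optical line pair lands at n * delta, i.e. in the radio-frequency range
    print(f"line {n}: optical ~{line_1 / 1e12:.4f} THz -> beat {beat / 1e6:.1f} MHz")
```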

The frequency combs must have high bandwidth; otherwise, they can only detect chemical compounds within a narrow frequency range, which could lead to false alarms or inaccurate results.

Dispersion is the most important factor that limits a frequency comb’s bandwidth. If there is dispersion, the laser lines are not evenly spaced, which is incompatible with the formation of frequency combs.

“With long wave infrared radiation, the dispersion will be very high. There is no way to get around it, so we have to find a way to compensate for it or counteract it by engineering our system,” Hu says.

Many existing approaches aren’t flexible enough to be used in different scenarios or don’t enable high enough bandwidth.

Hu’s group previously solved this problem in a different type of frequency comb, one that used terahertz waves, by developing a double-chirped mirror (DCM).

A DCM is a special type of optical mirror that has multiple layers with thicknesses that change gradually from one end to the other. They found that this DCM, which has a corrugated structure, could effectively compensate for dispersion when used with a terahertz laser.
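
As a rough illustration of the chirped-layer idea only (the numbers below are assumptions, not the corrugated terahertz design or the infrared mirror reported here), a quarter-wave stack whose target wavelength drifts from one end to the other reflects different wavelengths at different depths, which is what gives a chirped mirror its wavelength-dependent group delay:

```python
import numpy as np

# Toy chirped quarter-wave stack: the Bragg wavelength is swept through the
# long-wave infrared band so each wavelength reflects at a different depth.
# Refractive indices and the wavelength range are hypothetical.
wavelengths = np.linspace(8e-6, 10e-6, 20)   # target band, in metres
n_high, n_low = 3.3, 1.5                     # assumed high/low-index layer materials

layers = []
for lam in wavelengths:
    layers.append(("high", lam / (4 * n_high)))   # quarter-wave layer thickness
    layers.append(("low", lam / (4 * n_low)))

for material, thickness in layers[:6]:
    print(f"{material}-index layer: {thickness * 1e9:.0f} nm thick")
```

With a chirp like this, successive layers of the same material differ in thickness by only a few to tens of nanometres, which is why the fabrication tolerances described below were so demanding.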

“We tried to borrow this trick and apply it to an infrared comb, but we ran into lots of challenges,” Hu says.

Because infrared waves are 10 times shorter than terahertz waves, fabricating the new mirror required an extreme level of precision. At the same time, they needed to coat the entire DCM in a thick layer of gold to dissipate heat during laser operation. Plus, their dispersion measurement system, designed for terahertz waves, wouldn’t work with infrared waves, which have frequencies about 10 times higher than terahertz.

“After more than two years of trying to implement this scheme, we reached a dead end,” Hu says.

A new solution

Ready to throw in the towel, the team realized something they had missed. They had designed the mirror with corrugation to compensate for the lossy terahertz laser, but infrared radiation sources aren’t as lossy.

This meant they could use a standard DCM design to compensate for dispersion, which is compatible with infrared radiation. However, they still needed to create curved mirror layers to capture the beam of the laser, which made fabrication much more difficult than usual.

“The adjacent layers of mirror differ only by tens of nanometers. That level of precision precludes standard photolithography techniques. On top of that, we still had to etch very deeply into the notoriously stubborn material stacks. Achieving those critical dimensions and etch depths was key to unlocking broadband comb performance,” Zeng says. In addition to precisely fabricating the DCM, they integrated the mirror directly onto the laser, making the device extremely compact. The team also developed a high-resolution, on-chip dispersion measurement platform that doesn’t require bulky external equipment.

“Our approach is flexible. As long as we can use our platform to measure the dispersion, we can design and fabricate a DCM that compensates for it,” Hu adds.

Taken together, the DCM and on-chip measurement platform enabled the team to generate stable infrared laser frequency combs that had far greater bandwidth than can usually be achieved without a DCM.

In the future, the researchers want to extend their approach to other laser platforms that could generate combs with even greater bandwidth and higher power for more demanding applications.

“These researchers developed an ingenious nanophotonic dispersion compensation scheme based on an integrated air–dielectric double-chirped mirror. This approach provides unprecedented control over dispersion, enabling broadband comb formation at room temperature in the long-wave infrared. Their work opens the door to practical, chip-scale frequency combs for applications ranging from chemical sensing to free-space communications,” says Jacob B. Khurgin, a professor at the Johns Hopkins University Whiting School of Engineering, who was not involved with this paper.

This work is funded, in part, by the U.S. Defense Advanced Research Projects Agency (DARPA) and the Gordon and Betty Moore Foundation. This work was carried out, in part, using facilities at MIT.nano.

© Image: Courtesy of the researchers

The comb uses a double-chirped mirror (DCM), pictured, which is a special type of optical mirror that has multiple layers with thicknesses that change gradually from one end to the other.

Artificial heart valve found to be safe following long-term test in animals

SEBS polymer artificial heart valve prototype

A research team, led by the Universities of Bristol and Cambridge, demonstrated that the polymer material used to make the artificial heart valve is safe following a six-month test in sheep.

Currently, the 1.5 million patients who need heart valve replacements each year face trade-offs. Mechanical heart valves are durable but require lifelong blood thinners due to a high risk of blood clots, whereas biological valves, made from animal tissue, typically last eight to 10 years before needing replacement.

The artificial heart valve developed by the researchers is made from SEBS (styrene-block-ethylene/butylene-block-styrene) – a type of plastic that has excellent durability but does not require blood thinners – and potentially offers the best of both worlds. However, further work is required before it can be tested in humans.

In their study, published in the European Journal of Cardio-Thoracic Surgery, the researchers tested a prototype SEBS heart valve in a preclinical sheep model that mimicked how these valves might perform in humans.

The animals were monitored over six months to examine potential long-term safety issues associated with the plastic material. At the end of the study, the researchers found no evidence of harmful calcification (mineral buildup), material deterioration, blood clotting or signs of cell toxicity. Animal health, wellbeing, blood tests and weight were all stable and normal, and the prototype valve functioned well throughout the testing period, with no need for blood thinners.

“More than 35 million patients’ heart valves are permanently damaged by rheumatic fever, and with an ageing population, this figure is predicted to increase four to five times by 2050,” said Professor Raimondo Ascione from the University of Bristol, the study’s clinical lead. “Our findings could mark the beginning of a new era for artificial heart valves: one that may offer safer, more durable and more patient-friendly options for patients of all ages, with fewer compromises.”

“We are pleased that the new plastic material has been shown to be safe after six months of testing in vivo,” said Professor Geoff Moggridge from Cambridge’s Department of Chemical Engineering and Biotechnology, biomaterial lead on the project. “Confirming the safety of the material has been an essential and reassuring step for us, and a green light to progress the new heart valve replacement toward bedside testing.”

The results suggest that artificial heart valves made from SEBS are durable and do not require the lifelong use of blood thinners.

While the research is still early-stage, the findings help clear a path to future human testing. The next step will be to develop a clinical-grade version of the SEBS polymer heart valve and test it in a larger preclinical trial before seeking approval for a pilot human clinical trial.

The study was funded by a British Heart Foundation (BHF) grant and supported by a National Institute for Health and Care Research (NIHR) Invention for Innovation (i4i) Product Development Award (PDA). Geoff Moggridge is a Fellow of King's College, Cambridge.

Reference:
Raimondo Ascione et al. ‘Material safety of styrene-block-ethylene/butylene-block-styrene copolymers used for cardiac valves: 6-month in-vivo results from a juvenile sheep model’. European Journal of Cardio-Thoracic Surgery (2025). DOI: 10.1093/ejcts/ezaf266/ejcts-2025-100426

Adapted from a University of Bristol media release. 

An artificial heart valve made from a new type of plastic could be a step closer to use in humans, following a successful long-term safety test in animals.


Startups to receive support in new programme

A mosaic of black and white head images of all those taking part in the SPARK 1.0 incubator

Created by King's E-Lab, in partnership with Founders at the University of Cambridge, SPARK will act as an entrepreneurial launchpad. This programme will offer hands-on support, world-class mentorship and practical training to enable world-changing ventures covering challenges such as disease prevention and treatment, fertility support and climate resilience. The combined networks of successful entrepreneurs, investor alumni and venture-building expertise brought by King’s E-Lab and Founders at the University of Cambridge will address a critical gap to drive innovation.

More than 180 applications were received for SPARK 1.0, reflecting strong demand for early incubation support. The selected companies are focused on AI, machine learning, biotechnology and impact: 42% are at idea stage, 40% have an early-stage product, and 17% have early users. Around half of the selected companies are led by women.

  • Ashgold Africa - An edtech business building solar projects to provide sustainable energy in rural Kenya.
  • Aizen Software - Credit referencing fintech working on financial inclusion.
  • Atera Analytics - Optimising resources around the EV energy infrastructure ecosystem.
  • Cambridge Mobilytics - Harnessing data from UK EV charging stations to aid decision-making in the e-mobility sector.  
  • Dielectrix - Building next-gen semiconductor dielectric materials for electronics using 2D materials.
  • Dulce Cerebrum - Building AI models to detect psychosis from blood tests.
  • GreenHarvest - Data-driven agritech firm using satellite and climate data to predict changing crop yield migration.
  • Heartly - Offering affordable, personalised guidance on preventing cardiovascular disease.
  • Human Experience Dynamics - Combining patient experiences and physiological measures to create holistic insight in psychiatric trials.
  • iFlame - Agentic AI system to help build creative product action plans.
  • IntolerSense - Uncovering undiscovered food intolerances using an AI-powered app.
  • Med Arcade - AI-powered co-pilot to help GPs interact with patient data.
  • MENRVA - AI-powered matchmaking engine for the art world, connecting galleries, buyers and art businesses.
  • Myta Bio - Leveraging biomimetic science to create superior industrial chemicals from natural ingredients.
  • Neela Biotech - Creating carbon-negative jet fuel.
  • Egg Advisor - Digital platform offering expert advice to women seeking to freeze their eggs.
  • Polytecks - Wearable tech firm building e-textiles capable of detecting valvular heart diseases.
  • RetroAnalytica - Using AI to decarbonise buildings by predicting energy inefficiencies.
  • SafeTide - Using ‘supramolecular’ technology to keep delicate medicines stable at room temperature for longer periods.
  • The Surpluss - Climate tech company identifying unused resources in businesses and redistributing them.
  • Yacson Therapeutics - Using ML to find plant-based therapeutics to help combat inflammatory bowel disease.
  • Zenithon AI - Using AI and ML to help advance the development of nuclear fusion energy.

The intensive incubator will run for four weeks from the end of August. Each participant will receive specialised support from Founders at the University of Cambridge and King’s E-Lab mentors and entrepreneurs-in-residence to turn their concepts into companies that can attract investment and ultimately grow into startups capable of driving economic growth.

Following the programme, the founders will emerge with:

  • A validated business model and a clear pathway to product development
  • Access to expert mentorship and masterclasses with global entrepreneurs and investors
  • The opportunity to pitch for £20,000 in investment, with the chance to pitch for further investment from established angel investors at Demo Day
  • A chance to join a thriving community of innovators and change-makers

Kamiar Mohaddes, co-founder and Director of King’s Entrepreneurship Lab, said: “Cambridge has been responsible for many world-changing discoveries, but entrepreneurship isn't the first thought of most people studying here. Driving economic growth requires inspiring the next generation to think boldly about how their ideas can shape industries and society. We want SPARK to be a catalyst, showing students the reality of founding a company. We look forward to seeing this cohort turn their ambitions into ventures that contribute meaningfully to the economy.”

Gerard Grech, Managing Director at Founders at the University of Cambridge, said: “Cambridge is aiming to double its tech and science output in the next decade – matching what it achieved in the past 20 years. That ambition starts at the grassroots. The energy from the students, postgraduates and alumni is clear, and with tech contributing £159 billion to the UK economy and 3 million jobs, building transformative businesses is one of the most powerful ways to make an impact. This SPARK 1.0 cohort is beginning that journey, and we’re pleased to partner with King’s Entrepreneurship Lab to support them.”

Gillian Tett, Provost of King’s College, said: “Cambridge colleges have more talent in AI, life sciences and technology, including quantum computing, than ever. Through SPARK, we can support even more students, researchers and alumni to turn their ambition into an investable idea and make the leap from the lab to the marketplace. This isn’t just a game-changer for King’s, but for every college in Cambridge whose students join this programme and journey with us to make an impact from Cambridge, on the world.”

Jim Glasheen, Chief Executive of Cambridge Enterprise, said: “The SPARK 1.0 cohort highlights the breadth and depth of innovation within collegiate Cambridge. SPARK, and the partnership between King’s College and Founders at the University of Cambridge, is a testament to our shared commitment to nurture and empower Cambridge innovators who will tackle global challenges and contribute to economic growth.”

The programme is free for students graduating in Summer 2025, postgraduates, post-docs, researchers, and alumni who have graduated within the last two years. This is made possible through support from the University of Cambridge, as well as a generous personal donation from Malcolm McKenzie, a King’s alumnus and Chair of the E-Lab’s Senior Advisory Board.

King’s Entrepreneurship Lab (King’s E-Lab) and Founders at the University of Cambridge have revealed the 24 startups that will join King’s College’s first-ever incubator programme designed to turn research-backed ideas from University of Cambridge students and alumni into investable companies.


Cambridge to host cutting-edge total-body PET scanner as part of nationwide imaging platform

Siemens Healthineers Biograph Vision Quadra Total-Body PET Scanner

The scanner, funded through a £5.5m investment from the UKRI Medical Research Council (MRC), will form part of the National PET Imaging Platform (NPIP), the UK’s first-of-its-kind national total-body PET imaging platform for drug discovery and clinical research.

Positron emission tomography (PET) is a powerful technology for imaging living tissues and organs down to the molecular level in humans. It can be used to investigate how diseases arise and progress and to detect and diagnose diseases at an early stage.

Total-body PET scanners are more sensitive than existing technology and so can provide new insights into anatomy that have never been possible before, improving detection, diagnosis and treatment of complex, multi-organ diseases.

Current PET technology is less sensitive and requires the patient to be repositioned multiple times to achieve a full-body field of view. Total-body PET scans can achieve this in one session and are quicker, exposing patients to considerably lower doses of radiation. This means more patients, including children, can participate in clinical research and trials to improve our understanding of diseases.

ANGLIA network of universities and NHS trusts

Supplied by Siemens Healthineers, the scanner will also be the focus of the ANGLIA network, comprising three universities, each paired with one or more local NHS trusts: the University of Cambridge and Cambridge University Hospitals NHS Foundation Trust; UCL and University College London Hospitals NHS Foundation Trust; and the University of Sheffield with Sheffield Teaching Hospitals NHS Foundation Trust.

The network, supported by UKRI, is partnered with biotech company Altos Labs and pharmaceutical company AstraZeneca, both with R&D headquarters in Cambridge, and Alliance Medical, a leading provider of diagnostic imaging.

Franklin Aigbirhio, Professor of Molecular Imaging Chemistry at the University of Cambridge, will lead the ANGLIA network. He said: “This is an exciting new technology that will transform our ability to answer important questions about how diseases arise and to search for and develop new treatments that will ultimately benefit not just our patients, but those across the UK and beyond.

“But this is more than just a research tool. It will also help us diagnose and treat diseases at an even earlier stage, particularly in children, for whom repeated investigations using standard PET scanners were not an option.”

The scanner will be located in Addenbrooke’s Hospital, Cambridge, supported by the National Institute for Health and Care Research (NIHR) Cambridge Biomedical Research Centre, ensuring that the discoveries and breakthroughs it enables can be turned rapidly into benefits to patients. It will expand NHS access to PET services, particularly in underserved areas across the East of England, and support more inclusive trial participation.

Patrick Maxwell, Regius Professor of Physic and Head of the School of Clinical Medicine at the University of Cambridge, said: “The ANGLIA network, centred on the Cambridge Biomedical Campus and with collaborations across the wider University and its partners, will drive innovations in many areas of this key imaging technology, such as new radiopharmaceuticals and application of AI to data analysis, that will bring benefits to patients far beyond its immediate reach. Its expertise will help build the next generation of PET scientists, as well as enabling partners in industry to use PET to speed up the development of new drugs.”

Roland Sinker, Chief Executive of Cambridge University Hospitals NHS Foundation Trust, which runs Addenbrooke’s Hospital, said: “I am pleased that our patients will be some of the first to benefit from this groundbreaking technology. Harnessing the latest technologies and enabling more people to benefit from the latest research is a vital part of our work at CUH and is crucial to the future of the NHS.

“By locating this scanner at Addenbrooke’s we are ensuring that it can be uniquely used to deliver wide ranging scientific advances across academia and industry, as well as improving the lives of patients.”

It is anticipated that the scanner will be installed by autumn 2026.

Professor Franklin Aigbirhio

Enhancing training and research capacity

The co-location of the total-body PET scanner with existing facilities and integration with systems at the University of Cambridge and Addenbrooke’s Hospital will also enhance training and research capacity, particularly for early-career researchers and underrepresented groups.

The ANGLIA network will provide opportunities to support and train more people from Black and other minority ethnic backgrounds to participate in PET chemistry and imaging. The University of Cambridge will support a dedicated fellowship scheme and capacity and capability training in key areas, and will strengthen the network partnership with joint projects and exchange visits.

Professor Aigbirhio, who is also co-chair of the UKRI MRC’s Black in Biomedical Research Advisory Group, added: “Traditionally, scientists from Black and other minority ethnic backgrounds are under-represented in the field of medical imaging. We aim to use our network to change this, providing fellowship opportunities and training targeted at members of these communities.”

The National PET Imaging Platform

Funded by UKRI’s Infrastructure Fund, and delivered by a partnership between Medicines Discovery Catapult, MRC and Innovate UK, NPIP provides a critical clinical infrastructure of scanners, creating a nationwide network for data sharing, discovery and innovation. It allows clinicians, industry and researchers to collaborate on an international scale to accelerate patient diagnosis, treatment and clinical trials. The MRC funding for the Cambridge scanner will support the existing UKRI Infrastructure Fund investment for NPIP and enables the University to establish a total-body PET facility.

Dr Ceri Williams, Executive Director of Challenge-Led Themes at MRC said: “MRC is delighted to augment the funding for NPIP to provide an additional scanner for Cambridge in line with the original recommendations of the funding panel. This additional machine will broaden the geographic reach of the platform, providing better access for patients from East Anglia and the Midlands, and enable research to drive innovation in imaging, detection, and diagnosis, alongside supporting partnership with industry to drive improvements and efficiency for the NHS.”

Dr Juliana Maynard, Director of Operations and Engagement for the National PET Imaging Platform, said: “We are delighted to welcome the University of Cambridge as the latest partner of NPIP, expanding our game-changing national imaging infrastructure to benefit even more researchers, clinicians, industry partners and, importantly, patients.

“Once operational, the scanner will contribute to NPIP’s connected network of data, which will improve diagnosis and aid researchers’ understanding of diseases, unlocking more opportunities for drug discovery and development. By fostering collaboration on this scale, NPIP helps accelerate disease diagnosis, treatment, and clinical trials, ultimately leading to improved outcomes for patients."

A new total-body PET scanner to be hosted in Cambridge – one of only a handful in the country – will transform our ability to diagnose and treat a range of conditions in patients and to carry out cutting-edge research and drug development.


NUS recognised for green building excellence

Demonstrating its strong commitment to decarbonising the built environment through a portfolio of 64 Green Mark-certified buildings, NUS was awarded the Green Mark Commemorative Certificate at the Singapore Green Building Council Gala Dinner on 11 July 2025 for its continued support and contributions to Singapore’s green building journey. The event was held in celebration of the 20th anniversary of the Building and Construction Authority (BCA) Green Mark Certification Scheme.

Leading the way in high-performance green buildings

As of July 2025, six buildings in NUS have achieved the highest energy performance ratings under the Green Mark 2021 In-Operation scheme - Platinum Super Low Energy (SLE), Zero Energy (ZE), and Positive Energy (PE):

  • SDE 4 – Platinum ZE (2018) and Platinum PE (2022)
  • SDE 1 – Platinum SLE (2018) and Platinum ZE (2023)
  • SDE 3 – Platinum SLE (2018, 2024)
  • Ventus – Platinum (2012) and Platinum SLE (2024)
  • AS8 – Platinum (2015) and Platinum SLE (2024)
  • Central Library – Platinum SLE (2020, 2024)

In 2021, NUS bagged the top BCA award for green buildings – the Green Mark Platinum Champion Award – for achieving 50 Green Mark Gold and above certifications for developments across its campuses.

One example is Ventus at NUS’ Kent Ridge Campus, where the Office of University Campus Infrastructure (UCI) is located. Through green building design and collaboration with building users to implement energy-saving measures, Ventus recorded an Energy Use Intensity (EUI) of 49 kWh/m2 in FY2024, significantly lower than the average EUI for offices of 219 kWh/m2. EUI measures a building’s total energy consumption relative to its gross floor area.
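
For readers unfamiliar with the metric, the minimal worked example below sketches the EUI calculation; the consumption and floor-area figures are hypothetical and chosen only so the result matches the 49 kWh/m2 reported for Ventus.

```python
# Hypothetical figures, used only to illustrate how EUI is computed.
annual_energy_kwh = 490_000        # total energy consumed in the financial year (kWh)
gross_floor_area_m2 = 10_000       # gross floor area of the building (m2)

eui = annual_energy_kwh / gross_floor_area_m2
print(f"EUI = {eui:.0f} kWh/m2 per year")   # prints: EUI = 49 kWh/m2 per year
```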

Beyond achieving high energy performance for the University’s new and existing buildings, green buildings also offer health and well-being benefits to their occupants. Comprehensive research by BCA and NUS in 2017 revealed that occupants of Green Mark-certified buildings experienced significantly better indoor environmental conditions than those in non-certified buildings, with measurable improvements in temperature control, humidity levels, air quality, and overall occupant satisfaction.

Translating innovation into practice

Beyond certification, NUS has played a pivotal role in pushing for an alternative cooling approach to reduce energy consumption while maintaining thermal comfort. Under Singapore’s newly proposed Built Environment Decarbonisation Technology Roadmap, Associate Professor Adrian Chong from the Department of the Built Environment at the College of Design and Engineering, together with the UCI sustainability team, contributed to the development of a new technical standard, TR 141:2025, which aims to accelerate the deployment of hybrid cooling systems across building typologies. Their inputs were informed by the design and operational experience gained from implementing hybrid cooling technology in Net Zero Energy buildings at NUS. The new technical standard addresses the current gap in international standards and provides guidelines suited to Singapore’s tropical climate.

Under the Campus Sustainability Roadmap 2030 Framework, the University aims to reduce Scope 1 and Scope 2 emissions by 30 per cent and reduce EUI by 20 per cent from the FY2019 baseline. By setting ambitious energy efficiency targets for its new and existing buildings, NUS aims to use the campus as a living lab where students can engage with real-world case studies for experiential learning and researchers can test-bed and develop innovations to improve building performance.

Supporting Singapore’s green building journey and Net Zero vision

NUS is committed to reducing both operational and embodied carbon across its building portfolio, in alignment with the national green building agenda and broader climate goals. As part of its strategy to address Scope 3 emissions, NUS has commissioned a study on sustainable construction, specifically on low embodied carbon materials, and is currently exploring the procurement of such materials for future NUS developments.

Vice President (Campus Infrastructure) Mr Koh Yan Leng said, “Greening the university’s buildings, such as by providing ample greenery, natural daylight, and more natural ventilation, has created a conducive environment for learning and working. Our commitment to bold energy targets helps shape the behaviours of future leaders and enables our community to engage with sustainable practices every day.”

By NUS University Campus Infrastructure

Beating the odds: How Singapore’s governance transformed its destiny

Days before Singapore celebrated its 60th year of independence, a new book was added to the canon of literature on the city-state’s improbable growth and survival – this time, through the eyes of public sector leaders who had a hand in its transformation.

Launched on 6 August 2025, How Singapore Beat the Odds: Insider Insights on Governance in the City-State by Terence Ho, Adjunct Associate Professor (Practice) at the Lee Kuan Yew School of Public Policy (LKYSPP), features in-depth interviews with 12 political and public sector leaders on the challenges faced, chances taken and decisions made that shaped Singapore into what it is today, as well as their perspectives on the journey and views on Singapore’s future.

Assoc Prof Ho is a frequent commentator on economic, fiscal, workforce and social policy issues, as well as skills and lifelong learning. As an adjunct faculty member of LKYSPP, he teaches in executive programmes covering various areas of public sector governance.

The foreword was written by Mr Tharman Shanmugaratnam, President of the Republic of Singapore, and the distinguished interviewees include former Singapore President Mdm Halimah Yacob; Ms Seah Jiak Choo, former Director-General of Education; Mr Peter Ho, former Chairman of the Urban Redevelopment Authority (URA); and Mr Ravi Menon, former Managing Director of the Monetary Authority of Singapore.

The project was mooted and sponsored by Mr Narayana Murthy, founder and Chairman Emeritus of technology provider Infosys and a long-time admirer of Singapore. He believed that such a book of insights into Singapore’s public governance would inspire and guide leaders of developing nations to drive similar transformations of their own.

“The success of Singapore lies not in any magic, but in a successful method,” said Mr Murthy at the launch event. “Singapore demonstrates that good public governance is a possible, practical, valuable and powerful strategy. To Singapore and to the visionaries who made it what it is today, I say ‘Thank you’ for lifting the standard, showing the way and reminding the world that public governance done right can transform the destiny of a nation.”

Noting the unplanned coincidence of the book’s launch with the nation’s diamond jubilee, Assoc Prof Ho highlighted that the themes explored in the book were not just critical in Singapore’s early development but remain relevant for its next 60 years. “While the book aims to inspire other countries, I hope that Singaporeans too will draw inspiration from the values and the pioneering spirit of our veteran leaders as we collectively continue to write the next chapter of the Singapore story.”

The book launch at the Capitol Kempinski Hotel Singapore was attended by President Tharman as Guest of Honour and more than 120 guests, including several of the book’s interviewees. During the event, three leaders featured in the book delivered speeches on the aspects of governance where they made significant contributions: Professor Lim Siong Guan, former Head of Civil Service, on fiscal management; Mr Khaw Boon Wan, former Singapore Cabinet Minister, on healthcare; and Professor Cheong Koon Hean, former CEO of URA and the Housing Development Board, on urban planning.

Priorities to not just survive, but thrive

The idea that no one owes Singapore its survival was a driving force for many early governance decisions, said Prof Lim, who is also Distinguished Practitioner Fellow at LKYSPP and was recently honoured as an Emeritus Professor by NUS for his noteworthy contributions to Singapore’s civil service.

In his speech, Prof Lim shared the government’s top spending priorities for nation-building: defence, industrialisation, home ownership, education and health. While principles of fiscal prudence were established from the start, the government recognised the need to enhance its revenue streams and created the sovereign wealth fund GIC and the reserves policy to generate investment returns and allocate them between current and future needs.

Prof Lim described Singapore’s approach thus: “If you want to spend more, you need to be able to earn more at the same time. Be fair; give access to the current generation to a reasonable or fair part of the investment returns, while at the same time making sure that you keep up the capital reserves for whatever may be the unexpected demands of the rainy days.”

Solving the issue of sustainable financing for the healthcare system was Mr Khaw’s task as Health Minister from 2004 to 2011. Balancing the two dimensions of healthcare, financing and delivery, is especially challenging because “in healthcare, demand is potentially unlimited, but supply is not,” and poor patients suffer the most when demand outstrips supply, said the current chairman of SPH Media Trust.

The solution lay in a mixed approach for both dimensions. Instead of relying on taxation or insurance for funding, Singapore employs a multi-layered model that includes patient co-payment to prevent overconsumption while keeping healthcare affordable. Similarly, healthcare services are delivered by a mix of public, private and charitable providers, which offers patients more choices and responds better to the diverse needs of an ageing population.

As Singapore evolves into a “super aged” society, the healthcare system must focus more on long-term chronic care, building on the pillars of preventative healthcare, primary healthcare by family physicians and a healthcare system that is integrated with the community to provide holistic care, said Mr Khaw.

He spotlighted the crucial role of caregivers, saying: “When a cure is no longer available, what patients value most is care, and the highest form of care is underpinned by love and compassion, often by the patient’s family. Society has a strong interest, indeed a strong duty, to support caregivers adequately.”

A combination of aspirations, innovation, and good urban governance enabled Singapore to turn the constraints that threatened its survival into catalysts for success, said Prof Cheong, Practice Professor at Singapore University of Technology and Design and Chairman of the Lee Kuan Yew Centre for Innovative Cities.

For example, water insecurity drove investments in water recycling and water catchments, and the scarcity of land forced planners to reclaim land, create underground spaces and plan vertically.

“The starting point is to embed a culture of foresight in the way our policies are formulated. The city did not wait for problems to become crises,” she said, adding that a whole-of-government approach and collaboration with the private sector, non-profits and citizens were key in creating solutions that balance economic competitiveness and quality of life.

Considering principles in context

The event concluded with a brief panel discussion moderated by Ms Ong Toon Hui, Vice Dean and Executive Director, Institute for Governance and Leadership at LKYSPP, in which the three speakers shared their thoughts on how the book’s insights into Singapore’s experience could be made relevant to other countries.

Prof Lim and Mr Khaw emphasised the importance of understanding the principles behind the decisions and translating those into actions that fit the context of each country, and Prof Cheong provided additional insights from her experience researching cities for 20 years and working on the Lee Kuan Yew World City Prize.

She cited long-term planning, good urban governance, strong leadership, institutionalised processes, and competent people as common traits of the award-winning cities, some of which have been integrated into the prize assessment criteria.

“The cities that have won the prize all exhibit very similar principles to us – maybe not so much in the details – but these are very high-level principles that I feel are applicable to most cities,” she said, reminding the audience that even as others look to Singapore for lessons in governance, there is much that Singapore can learn from other cities in return.

Setback in the fight against pediatric HIV

Nation & World

Setback in the fight against pediatric HIV

Funding cut disrupts Botswana-based effort to help patients control illness without regular treatments

Liz Mineo

Harvard Staff Writer

4 min read
Roger Shapiro.

Niles Singer/Harvard Staff Photographer

For more than 20 years, Harvard infectious disease specialist Roger Shapiro has fought HIV on the ground in Botswana, where the rate of infection exceeded 30 percent in some areas of the country in the 1990s.

Progress has been steady since then. According to the World Bank, Botswana still has one of the world’s highest rates of infection — over 20 percent of the adult population — but far fewer HIV deaths. The main lifesaver has been antiretroviral treatment (ART).

Shapiro began working in Botswana in 1999 under the mentorship of pioneering AIDS researcher Max Essex, who helped launch the Botswana Harvard Health Partnership (BHP). He has run dozens of studies on HIV/AIDS in Botswana and has become an expert in how HIV affects maternal and child health.

Max Essex.

In 2008, pioneering AIDS researcher Professor Max Essex spoke to a group gathered at his lab in Gaborone, Botswana.

Harvard file photos

Sign identifying Max Essex Program in Gaborone, Botswana.
On the grounds of Princess Marina Hospital in Gaborone, a plaque recognizes the partnership between Harvard School of Public Health and the Botswana Ministry of Health.

Among Shapiro’s current studies is a trial with the potential to help some children control HIV without the need for regular treatment. Efforts to create a vaccine have so far failed, but there are exciting new developments with products known as broadly neutralizing antibodies, or bNAbs, he says.

The trial aims to find a new treatment option by examining the effects of a combination of three broadly neutralizing HIV antibodies. It builds upon previous studies suggesting that bNAbs might help the immune system clear the virus better than standard ART, and may offer a promising avenue for getting to post-treatment viral control, Shapiro says.

“It is the only study in pediatrics looking at three antibodies as combination treatment for HIV and ultimately as a path toward HIV cure,” he said. “It’s really exciting science, since we are testing whether some children can go off all treatment and control HIV on their own.”

In May, the five-year grant supporting the study was slashed as part of the Trump administration’s mass cancellation of Harvard research funds. Four other grants for Botswana-based projects led by Shapiro were also canceled. The cuts have not only dealt a serious blow to the participants in the trial and their families, said Shapiro, but imperiled progress toward a cure for pediatric HIV.

“This was one of the largest funded studies to begin making inroads in this field,” he said. “Now all this science is up in the air.”

Funded by the National Institutes of Health and the National Institute of Allergy and Infectious Diseases, the trial is following 12 children, ages 2-9, who are living with HIV. The study is in its second year, and researchers have been gearing up to have the children pause standard ART and start using antibodies alone as treatment.

The team had planned to scale up to 41 children, but due to the cuts, they are now aiming for 30. They were able to secure donations to continue with the project until March, but it’s unclear what will happen after that.

According to the Centers for Disease Control, Botswana is a leader in global HIV efforts, having exceeded the UNAIDS 95-95-95 targets: “95 percent of people living with HIV in Botswana know their status, 98 percent of people who know their status receive treatment, and 98 percent of people on treatment are virally suppressed.”

“Botswana probably has the best program to prevent HIV transmission to children on the continent,” said Shapiro. “Now less than half a percent of the children become infected because most women access free drug treatment during pregnancy, which effectively turns off transmission. It’s a tiny percentage, but it still leads to more pediatric HIV infections than we see in the United States.”

Giving treatment to children infected with HIV every day for the rest of their lives is a daunting prospect for many families, said Shapiro. Parents were excited about the possibility that regular infusions of antibodies could liberate their children from daily treatment.

The grant’s termination is yet another blow to Botswana’s fight against HIV/AIDS. In February, assistance through three U.S. programs — USAID, the U.S. President’s Emergency Plan for AIDS Relief (PEPFAR), and the Centers for Disease Control and Prevention — was cut. Botswana’s government pays for medication, but it relied on those funds to provide services around HIV, said Shapiro.

“HIV/AIDS is essentially a chronic problem in Botswana, and a chronic problem needs ongoing treatment,” he said. “If treatment lapses … We worry about HIV transmission going back up again, not only in Botswana but throughout all of Africa.”

Why was Pacific Northwest home to so many serial killers?

Nation & World

Why was Pacific Northwest home to so many serial killers?

In ‘Murderland,’ alum explores lead-crime theory through lens of her own memories growing up there

Jacob Sweet

Harvard Staff Writer

5 min read

In Caroline Fraser’s 2025 book “Murderland,” the air is always thick with smog, and sinister beings lie around every corner.

“Murderland” book cover by Caroline Fraser

Fraser, Ph.D. ’87, in her first book since “Prairie Fires,” her Pulitzer Prize-winning biography of “Little House on the Prairie” author Laura Ingalls Wilder, explores the proliferation of serial killers in the 1970s — weaving together ecological and social history, memoir, and disturbing scenes of predation and violence. The resulting narrative shifts the conventional focus on the psychology of serial killers to the environment around them. As the Pacific Northwest reels from a slew of serial murderers, Fraser turns toward the nearby smelters that shoot plumes of lead, arsenic, and cadmium into the air and the companies, government officials, and even citizens who are happy to overlook the pollution.

Of the Pacific Northwest’s most notorious killers, Fraser ties many to these smokestacks. Ted Bundy, whose crimes and background are discussed more than any other character, grew up in the shadows of the ASARCO copper smelter in Tacoma, Washington. Gary Ridgway grew up in Tacoma, too, and Charles Manson spent 10 years at a nearby prison, where lead has seeped into the soil. Richard Ramirez, known as the Night Stalker, grew up next to a different ASARCO smokestack in El Paso, Texas, long before committing murders in Los Angeles.

Fraser’s own experiences growing up in Mercer Island, Washington, add another eerie dimension. A classmate’s father blows up his home with the family inside. Another classmate becomes a serial killer. Her Christian Scientist father is menacing and abusive, and Fraser, as a child, considers ways to get rid of him, possibly by pushing him off a boat. The darkness is unrelenting; something is in the air.

To what extent environmental degradation directly led to the killings described in the book, Fraser leaves up to readers. “There are many things that probably contribute to somebody who commits these kinds of crimes,” she said in an interview. “I did not conceive of it as a work of criminology or an academic treatise on the lead-crime hypothesis. I really just wanted to tell a history about the history of the area — what I remember of it — and create a narrative that took all these things into account.”

Fraser has been thinking about these ideas for decades. Before “Prairie Fires” was published, she had already written some of the memoir portions of the book, recalling the crimes and unusual occurrences near her family’s home. She was long interested in why there were so many serial killers in the Pacific Northwest and whether the answer was simply happenstance.

Though she had some knowledge of the pollution in Tacoma as a kid — the area’s smell was referred to as the “Aroma of Tacoma” due to sulfur emissions from a local factory — it wasn’t until decades later that she learned the full scope of industrial production and pollution.

Some revelations came by chance. When looking at one property on Vashon Island, across the Puget Sound from West Seattle, she came across a listing with the ominous warning — “arsenic remediation needed.”

“That just leapt out at me,” she said. “How can there be arsenic on Vashon Island?” After more research, she discovered that arsenic had come from the ASARCO smelter, on the south end of the same body of water. The damage reached much farther; the Washington State Department of Ecology says that air pollutants — mostly arsenic and lead — from the smelter settled on the surface soil of more than 1,000 square miles of the Puget Sound Basin.

“Much of Tacoma, with a population approaching 150,000, will record high lead levels in neighborhood soils,” Fraser wrote in the book, “but the Bundy family lives near a string of astonishingly high measurements of 280, 340, and 620 parts per million.”

The connection made Fraser focus more on the physical environment in which these serial killers lived and less on other factors — like a history of abuse — on which true-crime writers have historically placed greater emphasis.

In this ecological pursuit, Fraser points readers toward once-ubiquitous sources of pollution like leaded gas and the industry forces that popularized them against advice from public-health experts.

American physicians raise concerns that lead particulates will blanket the nation’s roads and highways, poisoning neighborhoods slowly and “insidiously.” They call it “the greatest single question in the field of public health that has ever faced the American public.” Their concerns are swept aside, however, and Frank Howard, a vice president of the Ethyl Corporation, a joint venture between General Motors and Standard Oil, calls leaded gasoline a “gift of God.”

Though Fraser doesn’t explicitly support the lead-crime hypothesis, the core of the idea — that greater exposure to lead results in higher rates of crime — remains central. In the book’s final chapter, Fraser cites the work of economist Jessica Wolpaw Reyes, Ph.D. ’02, who concluded in her dissertation that lead exposure correlates with higher adult crime rates.

Regardless of exactly how much this hypothesis can be assuredly proven, Fraser thinks the connections between unapologetic and unfettered pollution and violent crime warrant scrutiny. In “Murderland,” she gives the idea, and an era of crime, a nimble, haunting narrative.

A new model predicts how molecules will dissolve in different solvents

Using machine learning, MIT chemical engineers have created a computational model that can predict how well any given molecule will dissolve in an organic solvent — a key step in the synthesis of nearly any pharmaceutical. This type of prediction could make it much easier to develop new ways to produce drugs and other useful molecules.

The new model, which predicts how much of a solute will dissolve in a particular solvent, should help chemists to choose the right solvent for any given reaction in their synthesis, the researchers say. Common organic solvents include ethanol and acetone, and there are hundreds of others that can also be used in chemical reactions.

“Predicting solubility really is a rate-limiting step in synthetic planning and manufacturing of chemicals, especially drugs, so there’s been a longstanding interest in being able to make better predictions of solubility,” says Lucas Attia, an MIT graduate student and one of the lead authors of the new study.

The researchers have made their model freely available, and many companies and labs have already started using it. The model could be particularly useful for identifying solvents that are less hazardous than some of the most commonly used industrial solvents, the researchers say.

“There are some solvents which are known to dissolve most things. They’re really useful, but they’re damaging to the environment, and they’re damaging to people, so many companies require that you have to minimize the amount of those solvents that you use,” says Jackson Burns, an MIT graduate student who is also a lead author of the paper. “Our model is extremely useful in being able to identify the next-best solvent, which is hopefully much less damaging to the environment.”

William Green, the Hoyt Hottel Professor of Chemical Engineering and director of the MIT Energy Initiative, is the senior author of the study, which appears today in Nature Communications. Patrick Doyle, the Robert T. Haslam Professor of Chemical Engineering, is also an author of the paper.

Solving solubility

The new model grew out of a project that Attia and Burns worked on together in an MIT course on applying machine learning to chemical engineering problems. Traditionally, chemists have predicted solubility with a tool known as the Abraham Solvation Model, which can be used to estimate a molecule’s overall solubility by adding up the contributions of chemical structures within the molecule. While these predictions are useful, their accuracy is limited.
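
The additive logic of such group-contribution models can be sketched in a few lines. The descriptor names below follow the usual Abraham convention, but the numerical descriptor values and coefficients are placeholders for illustration, not fitted parameters.

```python
# Schematic additive solvation model: log10 solubility is estimated as an
# intercept plus a sum of solute descriptors weighted by solvent-specific
# coefficients. All numbers are illustrative placeholders.

def additive_log_solubility(descriptors, coefficients, intercept):
    return intercept + sum(coefficients[k] * descriptors[k] for k in descriptors)

solute = {"E": 0.80, "S": 1.10, "A": 0.30, "B": 0.85, "V": 1.20}      # hypothetical solute descriptors
solvent = {"E": 0.20, "S": -0.50, "A": 0.10, "B": -1.30, "V": 0.90}   # hypothetical solvent coefficients

print(f"estimated log10 solubility: {additive_log_solubility(solute, solvent, intercept=0.25):.2f}")
```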

In the past few years, researchers have begun using machine learning to try to make more accurate solubility predictions. Before Burns and Attia began working on their new model, the state-of-the-art model for predicting solubility was a model developed in Green’s lab in 2022.

That model, known as SolProp, works by predicting a set of related properties and combining them, using thermodynamics, to ultimately predict the solubility. However, the model has difficulty predicting solubility for solutes that it hasn’t seen before.

“For drug and chemical discovery pipelines where you’re developing a new molecule, you want to be able to predict ahead of time what its solubility looks like,” Attia says.

Part of the reason that existing solubility models haven’t worked well is that there wasn’t a comprehensive dataset to train them on. However, in 2023 a new dataset called BigSolDB was released, which compiled data from nearly 800 published papers, including solubility information for about 800 molecules dissolved in more than 100 organic solvents that are commonly used in synthetic chemistry.

Attia and Burns decided to try training two different types of models on this data. Both of these models represent the chemical structures of molecules using numerical representations known as embeddings, which incorporate information such as the number of atoms in a molecule and which atoms are bound to which other atoms. Models can then use these representations to predict a variety of chemical properties.

One of the models used in this study, known as FastProp and developed by Burns and others in Green’s lab, incorporates “static embeddings.” This means that the model already knows the embedding for each molecule before it starts doing any kind of analysis.

The other model, ChemProp, learns an embedding for each molecule during the training, at the same time that it learns to associate the features of the embedding with a trait such as solubility. This model, developed across multiple MIT labs, has already been used for tasks such as antibiotic discovery, lipid nanoparticle design, and predicting chemical reaction rates.
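
The distinction can be sketched with a toy example: with a static embedding, the molecular features are fixed before training and only the readout on top of them is fitted, whereas with a learned embedding the representation itself carries trainable weights that are updated alongside the readout. The sketch below uses random stand-in data and is not the authors’ FastProp or ChemProp code.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((100, 16))      # stand-in: 100 molecules x 16 precomputed descriptors
y = rng.random(100)            # stand-in solubility targets

# Static-embedding route: the 16 descriptors are frozen, so training only
# fits a readout on top of them (here, a plain least-squares linear model).
w_static, *_ = np.linalg.lstsq(X, y, rcond=None)

# Learned-embedding route: a trainable projection produces the embedding,
# and it is optimised jointly with the readout by gradient descent.
W_embed = 0.1 * rng.normal(size=(16, 8))   # embedding weights (learned)
w_out = 0.1 * rng.normal(size=8)           # readout weights (learned)
lr = 0.1
for _ in range(200):
    h = np.tanh(X @ W_embed)               # learned embedding for each molecule
    err = h @ w_out - y                    # prediction error
    grad_out = h.T @ err / len(y)
    grad_embed = X.T @ ((err[:, None] * w_out) * (1.0 - h ** 2)) / len(y)
    w_out -= lr * grad_out
    W_embed -= lr * grad_embed

print("learned-embedding training MSE:",
      float(np.mean((np.tanh(X @ W_embed) @ w_out - y) ** 2)))
```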

The researchers trained both types of models on over 40,000 data points from BigSolDB, including information on the effects of temperature, which plays a significant role in solubility. Then, they tested the models on about 1,000 solutes that had been withheld from the training data. They found that the models’ predictions were two to three times more accurate than those of SolProp, the previous best model, and the new models were especially accurate at predicting variations in solubility due to temperature.

“Being able to accurately reproduce those small variations in solubility due to temperature, even when the overarching experimental noise is very large, was a really positive sign that the network had correctly learned an underlying solubility prediction function,” Burns says.

Accurate predictions

The researchers had expected that the model based on ChemProp, which is able to learn new representations as it goes along, would be able to make more accurate predictions. However, to their surprise, they found that the two models performed essentially the same. That suggests that the main limitation on their performance is the quality of the data, and that the models are performing as well as theoretically possible based on the data that they’re using, the researchers say.

“ChemProp should always outperform any static embedding when you have sufficient data,” Burns says. “We were blown away to see that the static and learned embeddings were statistically indistinguishable in performance across all the different subsets, which indicates to us that the data limitations that are present in this space dominated the model performance.”

The models could become more accurate, the researchers say, if better training and testing data were available — ideally, data obtained by one person or a group of people all trained to perform the experiments the same way.

“One of the big limitations of using these kinds of compiled datasets is that different labs use different methods and experimental conditions when they perform solubility tests. That contributes to this variability between different datasets,” Attia says.

Because the model based on FastProp makes its predictions faster and has code that is easier for other users to adapt, the researchers decided to make that one, known as FastSolv, available to the public. Multiple pharmaceutical companies have already begun using it.

“There are applications throughout the drug discovery pipeline,” Burns says. “We’re also excited to see, outside of formulation and drug discovery, where people may use this model.”

The research was funded, in part, by the U.S. Department of Energy.

© Image: Courtesy of the researchers; MIT News

MIT chemical engineers created a computational model that can predict how well a given molecule will dissolve in an organic solvent.

Singapore’s strong research base makes it easier for it to adopt nuclear energy: UN nuclear chief

Singapore is seen as an ideal country that could use nuclear energy, due to its technological know-how, institutional maturity, and geographical constraints on renewable power generation.  

This view came from Mr Rafael Mariano Grossi, Director General of the International Atomic Energy Agency (IAEA), who delivered a lecture on developments in atomic energy and nuclear security at NUS on 25 July 2025. The lecture, hosted by the Singapore Nuclear Research and Safety Institute (SNRSI) at NUS, was followed by a question-and-answer session moderated by Associate Professor Leong Ching, NUS Vice Provost (Student Life) and Acting Dean of the Lee Kuan Yew School of Public Policy at NUS.

“When it comes to decarbonising, what are your options? Here, there is no hydropower. You have renewables, but you don’t have much territory... so you cannot have wind parks for kilometres on end,” he told an audience of 250 students, undergraduates, academics, experts and government officials at NUS’ Shaw Foundation Alumni House.

“In my opinion, and in the opinion of many experts, in terms of the options, perhaps Singapore could rightly figure as the most perfect example of a country that needs nuclear energy. With a very small nuclear power plant, you can have a level of energy density and production that you cannot match with anything else.”

Singapore has been contributing to global developments in nuclear research for the last decade and has been a member of the IAEA since 1967.

For example, Singapore conducts training on nuclear science and technology for other countries, and its government agencies contribute to technical committees of the IAEA — which functions as the United Nations nuclear watchdog.

While the government announced in February that Singapore would study the potential deployment of nuclear power and systematically build up capabilities in the area, no decision has been made on whether the country will adopt nuclear energy.

Meanwhile, the country is further building its nuclear expertise with SNRSI, which was launched on 11 July 2025.

“Having Mr Grossi here today is of special significance to us because, earlier this month, the Singapore Nuclear Research and Safety Institute was officially launched after 10 years as an Initiative. It didn’t happen overnight, it took us 10 years to grow our size and capability,” said Professor Lui Pao Chuen, Chairman of the SNRSI Management Board, in his opening remarks.

“Singapore is committed and ready to contribute to the safe, peaceful use of nuclear science and technology,” he added.

SNRSI will operate from a new purpose-built facility at NUS, with a S$66 million grant. It plans to double its pool of experts to 100 by 2030, and will play a major role in Singapore’s partnership with the IAEA to train experts from developing countries in nuclear research.

Singapore has also worked with the IAEA on other areas, such as the designation of the NUS Centre for Ion Beam Applications as a Collaborating Centre — the first such centre in Singapore. One focus of the Collaborating Centre is proton beam therapy, which is used in radiation cancer treatment.

What’s driving the nuclear pivot

The pivot towards atomic power has been a global trend, with ASEAN countries also showing interest in collaborating with the IAEA to develop nuclear energy capabilities. Countries such as Indonesia, Vietnam and Myanmar, for instance, plan to build nuclear power plants.

“It is not a trend towards nuclear dominance,” noted Mr Grossi, emphasising that the role of nuclear energy is to provide a stable base load for the grid that “never stops”.

Instead, he attributed the rising interest to two factors: decarbonisation and energy security.

If the world is to hit the decarbonisation targets of the Paris Agreement, he argued, it will have to include nuclear energy. “Decarbonisation without nuclear (energy) is practically utopian,” he added.

A more fragmented world also means growing energy security concerns, as an overreliance on external energy supplies could put countries in a vulnerable position.

“The allure of a source of energy which gives you total independence becomes even clearer when you have a nuclear power plant. You switch it on, it’s yours,” said Mr Grossi.

If Singapore is keen to pursue nuclear energy, he suggested that the Republic could share a nuclear power plant with an ASEAN neighbour, citing how this is done in Slovenia with the Krsko power plant that provides energy to both Slovenia and Croatia.

Safety is top priority

While there are benefits to nuclear energy, the topic of safety, among other areas such as application guidelines and costs, was brought up and addressed by Mr Grossi during the question-and-answer session.

In response to a question on the effects of a nuclear power plant incident in a densely populated city such as Singapore, Mr Grossi assured the audience that safety is the IAEA’s top priority. “We at the IAEA develop, together with the countries, emergency preparedness and response mechanisms,” he said. “It’s one indispensable part of nuclear power planning and operation.”

To another question on disposing of radioactive waste, Mr Grossi said that no country can run a nuclear power programme without first having a clearly defined and agreed plan for managing the waste – a process that, he emphasised, must be considered from the start and not left until after the fuel is used. “There are very clear methodologies to deal with it, and they are used…to great success.”

While there has been debate over the long-term disposal of spent fuel because of its lasting radiation, Mr Grossi noted that the amount of such waste is extremely low. The waste generated is also inspected to ensure there is no radiation hazard or misuse of nuclear material. “We are the only industry that checks the rubbish,” he added with a smile.

Summing up the potential of nuclear power, he concluded, “Let me say that all of this presents a picture of opportunities, challenges and problems.” It is a blueprint that the IAEA, in partnership with countries like Singapore, will continue to fine-tune and optimise.

Researchers glimpse the inner workings of protein language models

Within the past few years, models that can predict the structure or function of proteins have been widely used for a variety of biological applications, such as identifying drug targets and designing new therapeutic antibodies.

These models, which are based on large language models (LLMs), can make very accurate predictions of a protein’s suitability for a given application. However, there’s no way to determine how these models make their predictions or which protein features play the most important role in those decisions.

In a new study, MIT researchers have used a novel technique to open up that “black box” and allow them to determine what features a protein language model takes into account when making predictions. Understanding what is happening inside that black box could help researchers to choose better models for a particular task, helping to streamline the process of identifying new drugs or vaccine targets.

“Our work has broad implications for enhanced explainability in downstream tasks that rely on these representations,” says Bonnie Berger, the Simons Professor of Mathematics, head of the Computation and Biology group in MIT’s Computer Science and Artificial Intelligence Laboratory, and the senior author of the study. “Additionally, identifying features that protein language models track has the potential to reveal novel biological insights from these representations.”

Onkar Gujral, an MIT graduate student, is the lead author of the open-access study, which appears this week in the Proceedings of the National Academy of Sciences. Mihir Bafna, an MIT graduate student in electrical engineering and computer science, and Eric Alm, an MIT professor of biological engineering, are also authors of the paper.

Opening the black box

In 2018, Berger and former MIT graduate student Tristan Bepler PhD ’20 introduced the first protein language model. Their model, like subsequent protein models that accelerated the development of AlphaFold, such as ESM2 and OmegaFold, was based on LLMs. These models, which include ChatGPT, can analyze huge amounts of text and figure out which words are most likely to appear together.

Protein language models use a similar approach, but instead of analyzing words, they analyze amino acid sequences. Researchers have used these models to predict the structure and function of proteins, and for applications such as identifying proteins that might bind to particular drugs.

In a 2021 study, Berger and colleagues used a protein language model to predict which sections of viral surface proteins are less likely to mutate in a way that enables viral escape. This allowed them to identify possible targets for vaccines against influenza, HIV, and SARS-CoV-2.

However, in all of these studies, it has been impossible to know how the models were making their predictions.

“We would get out some prediction at the end, but we had absolutely no idea what was happening in the individual components of this black box,” Berger says.

In the new study, the researchers wanted to dig into how protein language models make their predictions. Just like LLMs, protein language models encode information as representations that consist of a pattern of activation of different “nodes” within a neural network. These nodes are analogous to the networks of neurons that store memories and other information within the brain.

The inner workings of LLMs are not easy to interpret, but within the past couple of years, researchers have begun using a type of algorithm known as a sparse autoencoder to help shed some light on how those models make their predictions. The new study from Berger’s lab is the first to use this algorithm on protein language models.

Sparse autoencoders work by adjusting how a protein is represented within a neural network. Typically, a given protein will be represented by a pattern of activation of a constrained number of neurons, for example, 480. A sparse autoencoder will expand that representation into a much larger number of nodes, say 20,000.

When information about a protein is encoded by only 480 neurons, each node lights up for multiple features, making it very difficult to know what features each node is encoding. However, when the neural network is expanded to 20,000 nodes, this extra space along with a sparsity constraint gives the information room to “spread out.” Now, a feature of the protein that was previously encoded by multiple nodes can occupy a single node.
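
A minimal sparse autoencoder along these lines, with the 480-to-20,000 expansion taken from the article but the penalty weight and training details purely assumed, could look like this:

```python
# Minimal sparse autoencoder sketch. The 480-to-20,000 expansion is taken from
# the article; the L1 penalty weight and training details are assumptions.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, in_dim: int = 480, latent_dim: int = 20_000):
        super().__init__()
        self.encoder = nn.Linear(in_dim, latent_dim)
        self.decoder = nn.Linear(latent_dim, in_dim)

    def forward(self, x: torch.Tensor):
        z = torch.relu(self.encoder(x))   # non-negative latent activations
        return self.decoder(z), z

def sae_loss(x, x_hat, z, l1_weight: float = 1e-3):
    # Reconstruction keeps the expanded code faithful to the original embedding;
    # the L1 term keeps most latent units silent, so individual protein features
    # tend to settle into individual nodes.
    recon = torch.mean((x - x_hat) ** 2)
    sparsity = torch.mean(torch.abs(z))
    return recon + l1_weight * sparsity

model = SparseAutoencoder()
x = torch.randn(8, 480)          # a batch of protein embeddings (random stand-ins)
x_hat, z = model(x)
sae_loss(x, x_hat, z).backward()
```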

“In a sparse representation, the neurons lighting up are doing so in a more meaningful manner,” Gujral says. “Before the sparse representations are created, the networks pack information so tightly together that it's hard to interpret the neurons.”

Interpretable models

Once the researchers obtained sparse representations of many proteins, they used an AI assistant called Claude (related to the popular Anthropic chatbot of the same name) to analyze the representations. In this case, they asked Claude to compare the sparse representations with the known features of each protein, such as molecular function, protein family, or location within a cell.

By analyzing thousands of representations, Claude can determine which nodes correspond to specific protein features, then describe them in plain English. For example, the algorithm might say, “This neuron appears to be detecting proteins involved in transmembrane transport of ions or amino acids, particularly those located in the plasma membrane.”

This process makes the nodes far more “interpretable,” meaning the researchers can tell what each node is encoding. They found that the features most likely to be encoded by these nodes were protein family and certain functions, including several different metabolic and biosynthetic processes.
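
A crude quantitative stand-in for that labeling step (the study used Claude to write plain-English descriptions; the scoring below is only an assumed illustration) is to ask, for each latent node, how much more strongly it activates on proteins that carry a given annotation than on proteins that do not:

```python
# A crude stand-in for the LLM-based labeling step: rank latent nodes by how
# much more they activate on proteins carrying a given annotation than on
# proteins without it. All data below are random stand-ins.
import numpy as np

def node_annotation_scores(z: np.ndarray, has_annotation: np.ndarray) -> np.ndarray:
    """z: (n_proteins, n_latent) sparse activations; has_annotation: boolean mask.
    Returns the mean-activation difference for each latent node."""
    return z[has_annotation].mean(axis=0) - z[~has_annotation].mean(axis=0)

rng = np.random.default_rng(0)
z = np.abs(rng.normal(size=(1000, 2000)))   # stand-in activations (latent size shrunk for this toy example)
labels = rng.random(1000) < 0.1             # stand-in "transmembrane transport" annotations
top_nodes = np.argsort(node_annotation_scores(z, labels))[::-1][:10]
```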

“When you train a sparse autoencoder, you aren’t training it to be interpretable, but it turns out that by incentivizing the representation to be really sparse, that ends up resulting in interpretability,” Gujral says.

Understanding what features a particular protein model is encoding could help researchers choose the right model for a particular task, or tweak the type of input they give the model, to generate the best results. Additionally, analyzing the features that a model encodes could one day help biologists to learn more about the proteins that they are studying.

“At some point when the models get a lot more powerful, you could learn more biology than you already know, from opening up the models,” Gujral says.

The research was funded by the National Institutes of Health. 

© Image: MIT News; iStock

Understanding what is happening inside the “black box” of large protein models could help researchers to choose better models for a particular task, helping to streamline the process of identifying new drugs or vaccine targets.

For some, the heart attack is just the beginning 

Health

For some, the heart attack is just the beginning 

Woman walking on pulse trace shaped zigzag line.

Getty Images

Sy Boles

Harvard Staff Writer

6 min read

Harvard clinic uses mindfulness techniques to treat medically induced PTSD

Heart attacks are life-changing events, but one type can be particularly distressing. 

Spontaneous coronary artery dissection primarily strikes women under 50. Often, they are physically fit nonsmokers with good cholesterol and normal blood pressure — in other words, the very people who least expect a cardiac emergency. The shock of such an event may help explain why as many as 30 percent of survivors develop symptoms of medically induced post-traumatic stress disorder.

“Medically induced PTSD is basically PTSD that results from a sudden, catastrophic, life-threatening medical condition,” said Christina Luberto, a clinical health psychologist in the Department of Psychiatry at Mass General Hospital/Harvard Medical School. “It actually accounts for about 7 percent of all PTSD cases.” 

Luberto is the founding director of the Mindful Living Center, a mental health service embedded with the Mass General Women’s Heart Health Program. The Mindful Living Center is one of the few programs in the country to integrate psychological services directly into cardiovascular care for women. 

Christina Luberto.

Stephanie Mitchell/Harvard Staff Photographer

“We treat survivors whose primary presenting problem is the fear of recurrence,” she said. “They’re terrified by the uncertainty and possibility that it is going to happen again.”

Despite its prevalence, medically induced PTSD wasn’t formally recognized until the 1990s, when the Diagnostic and Statistical Manual of Mental Disorders expanded the definition to include trauma from medical events. It later tightened the criteria to sudden conditions, excluding chronic conditions like cancer or HIV. Research has shown that patients with medically induced PTSD tend to have worse recoveries and a higher risk of death than those without.

Medically induced PTSD symptoms mirror the symptoms of PTSD from external traumas, Luberto said: intrusive memories, hyperarousal, negative changes in mood or belief, and avoidance. But there are key differences. 

“People often think of PTSD that results from external events like serving in combat. People may have flashbacks and intrusive memories. They’re thinking about what happened in the past. They might avoid things like celebrations with fireworks and loud noises, friends from that time, and they’re sort of able to do that,” she said. “With medically induced PTSD, the threat is not left in the past. You can’t escape the source of the ongoing threat, because the source of the threat is your own body.” 

That reality makes survivors hyper-aware of physical sensations. Sweat or an elevated heart rate can trigger panic. Because exercise can mimic the sensations patients experienced during their heart attack, they may avoid working out — paradoxically, the very thing that could aid recovery and prevent future events. Others may skip medication, avoid medical follow-ups, or, conversely, over-engage with the healthcare system, frequently calling or messaging their providers. 

“It’s a vicious cycle. What I hear is the future-oriented worry: ‘Is this going to happen again?’” 

Christina Luberto

“There’s what we call cognitive reactivity in response to physical symptoms. ‘Why am I sweating? Why is my heart beating? Maybe it’s the coffee, but maybe it’s not. Should I go to the hospital?’ And then all of this thinking creates more physical symptoms of anxiety,” Luberto said. “It’s a vicious cycle. What I hear is the future-oriented worry: ‘Is this going to happen again?’”

Her research shows how the distressing thoughts can escalate. “Survivors start to believe different things about their body, and on some level, about the world. They believe, you know, ‘My body betrayed me. This is going to happen again. I’m not safe.’”

The Mindful Living Center, which opened in October 2023, employs an adapted Mindfulness-Based Cognitive Therapy method based on Luberto’s prior NIH-funded research. In online group therapy sessions, patients confront the source of their distress: their bodies. 

“Mindfulness meditation brings you into the body, noticing the body without judgment, feeling sensations, noticing where the body can still feel safe or can still feel comfortable, and being able to regulate your attention to move it out of the body if the anxiety gets too much.” 

The results are encouraging. Since it opened, the Mindful Living Center has received 181 referrals and treated 86 patients. Ninety percent of patients in the Mindfulness-Based Cognitive Therapy sessions reported improved emotional health, and 75 percent reported improved cardiac health.

“Stress and anxiety can have significant negative consequences for patients, from how they experience medical care to their ability to empower themselves to take steps to reduce future events,” said Amy Sarma, Cathy E. Minehan Endowed Chair in Cardiology at MGH and an assistant professor of medicine at Harvard Medical School. “However, most cardiologists do not have access to the resources to help their patients as we do at Mass General Brigham. Our partnership with Dr. Luberto in this unique program enables us to significantly advance the care of our patients.”

Nandita Scott, Ellertson Family Endowed Chair in Cardiovascular Medicine and the director of the Women’s Heart Health Program, highlighted the “exceptional support” the mindfulness program has received from the cardiology leadership at Mass General Brigham. “It’s well-established that mental health and cardiovascular outcomes are closely linked, yet few divisions would have had the vision or resources to fund such an initiative,” she said.

Luberto, who is also an executive faculty member in the MGH Health Promotion Resiliency Intervention Research Center and the MGH Benson-Henry Institute for Mind-Body Medicine, hopes to expand the Mindful Living Center’s offerings to other research-backed methodologies for managing medically induced PTSD. In a recent study led by UCLA doctoral student Corinne Meinhausen, with Luberto serving as a co-author, researchers reviewed therapies ranging from traditional cognitive behavioral therapy to written exposure therapy, a short five-session program in which patients write detailed accounts of the traumatic event. Written exposure therapy’s lower dropout rates and strong earlier results make it an appealing option, especially for patients reluctant to commit to longer, more intensive therapies.

Luberto said doctors can be on the lookout for PTSD symptoms resulting from traumatic medical events. The American Heart Association recommends screening for depression; she suggests adding PTSD screening for spontaneous coronary artery dissection patients, along with a clear treatment pathway. There is little research on risk factors or prevention of medically induced PTSD, but compassionate care during hospitalization couldn’t hurt, she said. 

“There are trauma-informed care principles in mental healthcare in general that include giving patients choice. Being transparent. Considering cultural and identity factors. It’s an important research question to see if that can prevent risk, but even if it can’t, it’s just good care.”

A shape-changing antenna for more versatile sensing and communication

MIT researchers have developed a reconfigurable antenna that dynamically adjusts its frequency range by changing its physical shape, making it more versatile for communications and sensing than static antennas.

A user can stretch, bend, or compress the antenna to make reversible changes to its radiation properties, enabling a device to operate in a wider frequency range without the need for complex, moving parts. With an adjustable frequency range, a reconfigurable antenna could adapt to changing environmental conditions and reduce the need for multiple antennas.

The word “antenna” may draw to mind metal rods like the “bunny ears” on top of old television sets, but the MIT team instead worked with metamaterials — engineered materials whose mechanical properties, such as stiffness and strength, depend on the geometric arrangement of the material’s components.

The result is a simplified design for a reconfigurable antenna that could be used for applications like energy transfer in wearable devices, motion tracking and sensing for augmented reality, or wireless communication across a wide range of network protocols.

In addition, the researchers developed an editing tool so users can generate customized metamaterial antennas, which can be fabricated using a laser cutter.

“Usually, when we think of antennas, we think of static antennas — they are fabricated to have specific properties and that is it. However, by using auxetic metamaterials, which can deform into three different geometric states, we can seamlessly change the properties of the antenna by changing its geometry, without fabricating a new structure. In addition, we can use changes in the antenna’s radio frequency properties, due to changes in the metamaterial geometry, as a new method of sensing for interaction design,” says lead author Marwa AlAlawi, a mechanical engineering graduate student at MIT.

Her co-authors include Regina Zheng and Katherine Yan, both MIT undergraduate students; Ticha Sethapakdi, an MIT graduate student in electrical engineering and computer science; Soo Yeon Ahn of the Gwangju Institute of Science and Technology in Korea; and co-senior authors Junyi Zhu, assistant professor at the University of Michigan; and Stefanie Mueller, the TIBCO Career Development Associate Professor in MIT’s departments of Electrical Engineering and Computer Science and Mechanical Engineering and leader of the Human-Computer Interaction Group at the Computer Science and Artificial Intelligence Lab. The research will be presented at the ACM Symposium on User Interface Software and Technology.

Making sense of antennas

While traditional antennas radiate and receive radio signals, in this work, the researchers looked at how the devices can act as sensors. The team’s goal was to develop a mechanical element that can also be used as an antenna for sensing.

To do this, they leveraged the antenna’s “resonance frequency,” which is the frequency at which the antenna is most efficient.

An antenna’s resonance frequency will shift due to changes in its shape. (Think about extending the left “bunny ear” to reduce TV static.) Researchers can capture these shifts for sensing. For instance, a reconfigurable antenna could be used in this way to detect the expansion of a person’s chest, to monitor their respiration.

To design a versatile reconfigurable antenna, the researchers used metamaterials. These engineered materials, which can be programmed to adopt different shapes, are composed of a periodic arrangement of unit cells that can be rotated, compressed, stretched, or bent.

By deforming the metamaterial structure, one can shift the antenna’s resonance frequency.

“In order to trigger changes in resonance frequency, we either need to change the antenna’s effective length or introduce slits and holes into it. Metamaterials allow us to get those different states from only one structure,” AlAlawi says.
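
A back-of-the-envelope illustration of that relationship, using the textbook first-order estimate for a rectangular patch antenna rather than the team's electromagnetic simulations, shows how a change in effective length translates into a resonance shift:

```python
# Back-of-the-envelope illustration using the textbook first-order estimate
# for a rectangular patch antenna, f_r ~ c / (2 * L * sqrt(eps_eff)). This is
# not the team's electromagnetic model; the numbers below are assumed.
C = 3.0e8  # speed of light, m/s

def resonance_hz(effective_length_m: float, eps_eff: float) -> float:
    return C / (2.0 * effective_length_m * eps_eff ** 0.5)

f_rest = resonance_hz(0.030, 2.9)               # 30 mm patch on a rubber-like dielectric (assumed values)
f_stretched = resonance_hz(0.030 * 1.05, 2.9)   # 5 percent longer effective length after stretching
shift_pct = 100 * (f_rest - f_stretched) / f_rest
print(f"{f_rest / 1e9:.2f} GHz -> {f_stretched / 1e9:.2f} GHz ({shift_pct:.1f}% lower)")
```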

The device, dubbed the meta-antenna, is composed of a dielectric layer of material sandwiched between two conductive layers.

To fabricate a meta-antenna, the researchers cut the dielectric layer out of a rubber sheet with a laser cutter. Then they added a patch on top of the dielectric layer using conductive spray paint, creating a resonating “patch antenna.”

But they found that even the most flexible conductive material couldn’t withstand the amount of deformation the antenna would experience.

“We did a lot of trial and error to determine that, if we coat the structure with flexible acrylic paint, it protects the hinges so they don’t break prematurely,” AlAlawi explains.

A means for makers

With the fabrication problem solved, the researchers built a tool that enables users to design and produce metamaterial antennas for specific applications.

The user can define the size of the antenna patch, choose a thickness for the dielectric layer, and set the length to width ratio of the metamaterial unit cells. Then the system automatically simulates the antenna’s resonance frequency range.
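
In the same spirit, a hypothetical parameter sweep (reusing the crude patch-antenna estimate from the earlier sketch; the real tool runs a proper electromagnetic simulation) hints at what such a design tool might report for different patch sizes and dielectric constants:

```python
# Hypothetical parameter sweep in the spirit of the design tool described
# above. The real tool runs an electromagnetic simulation; this reuses the
# crude patch-antenna estimate purely for illustration.
from itertools import product

C = 3.0e8  # speed of light, m/s

def resonance_hz(effective_length_m: float, eps_eff: float) -> float:
    return C / (2.0 * effective_length_m * eps_eff ** 0.5)

def estimate_range(patch_len_m: float, eps_eff: float,
                   stretch_states=(0.95, 1.0, 1.10)):
    # The three stretch states loosely mirror compressed, rest, and expanded
    # geometries of the metamaterial (the ratios are assumptions).
    freqs = [resonance_hz(patch_len_m * s, eps_eff) for s in stretch_states]
    return min(freqs), max(freqs)

for length_mm, eps in product((20, 30, 40), (2.5, 3.0)):
    lo, hi = estimate_range(length_mm / 1000, eps)
    print(f"patch {length_mm} mm, eps_eff {eps}: {lo / 1e9:.2f}-{hi / 1e9:.2f} GHz")
```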

“The beauty of metamaterials is that, because it is an interconnected system of linkages, the geometric structure allows us to reduce the complexity of a mechanical system,” AlAlawi says.

Using the design tool, the researchers incorporated meta-antennas into several smart devices, including a curtain that dynamically adjusts household lighting and headphones that seamlessly transition between noise-cancelling and transparent modes.

For the smart headphone, for instance, when the meta-antenna expands and bends, it shifts the resonance frequency by 2.6 percent, which switches the headphone mode. The team’s experiments also showed that meta-antenna structures are durable enough to withstand more than 10,000 compressions.

Because the antenna patch can be patterned onto any surface, it could be used with more complex structures. For instance, the antenna could be incorporated into smart textiles that perform noninvasive biomedical sensing or temperature monitoring.

In the future, the researchers want to design three-dimensional meta-antennas for a wider range of applications. They also want to add more functions to the design tool, improve the durability and flexibility of the metamaterial structure, experiment with different symmetric metamaterial patterns, and streamline some manual fabrication steps.

This research was funded, in part, by the Bahrain Crown Prince International Scholarship and the Gwangju Institute of Science and Technology.

© Credit: Courtesy of the researchers

A meta-antenna (shiny latticed material) could be incorporated into a curtain that dynamically adjusts household lighting. Here, a prototype is seen retracted (top left), expanded (bottom), and next to the latching mechanism (top right).

UM-NUS Celebrate Longstanding Friendship with Royal Banquet, 54th Golf Tournament and Precision Health Symposium in Malaysia

The National University of Singapore (NUS) and Universiti Malaya (UM) will celebrate more than six decades of collaboration with a series of commemorative events in Perak, Malaysia, later this month. Mr Tharman Shanmugaratnam, President of the Republic of Singapore and Chancellor of NUS, accompanied by an NUS delegation, will travel to Ipoh on 25 August 2025 for a working visit. President Tharman will be accompanied by his spouse, Mrs Jane Ittogi Shanmugaratnam.

The NUS delegation will include Chairman of the NUS Board of Trustees Mr Hsieh Fu Hua, NUS President Professor Tan Eng Chye, NUS Chief Alumni Officer Ms Ovidia Lim-Rajaram, NUS Golf Captain Ms Angelia Tay, academics, golfers and staff.

NUS marks its 120th anniversary in 2025, a legacy that began in 1905 with the establishment of the Straits Settlements and Federated Malay States Government Medical School, which later became Universiti Malaya. In 1962, UM’s Singapore campus was reconstituted as the University of Singapore, which later evolved into the National University of Singapore. The strong ties between the two institutions have continued alongside the warm friendship between Singapore and Malaysia.

“The longstanding friendship between NUS and UM is rooted in a shared history and a mutual commitment to collaboration and educational excellence. As both universities mark their 120-year legacy this year, we are honoured to celebrate this enduring partnership through meaningful engagements which reflect not only our deep institutional ties, but also the warm and longstanding bonds between Singapore and Malaysia,” said Professor Tan Eng Chye, President of NUS. 

Meeting and Royal Welcome in Kuala Kangsar

On the evening of 25 August, President Tharman will have a meeting with the Chancellor of UM and Sultan of Perak His Royal Highness Sultan Nazrin Muizzuddin Shah Ibni Almarhum Sultan Azlan Muhibbuddin Shah Al-Maghfur-Lah at the Istana Iskandariah in Kuala Kangsar. This will be followed by a Royal Dinner hosted by HRH Sultan Nazrin Shah and the Raja Permaisuri of Perak, Her Royal Highness Tuanku Zara Salim, for President Tharman, Mrs Tharman and the NUS delegation.

UM–NUS Inter-University Tunku Chancellor Golf Tournament

On 26 August, more than 100 golfers from NUS and UM will tee off at the Royal Perak Golf Club in the 54th UM–NUS Inter-University Tunku Chancellor Golf Tournament. First held in 1968 to strengthen ties between the two institutions, the tournament alternates venues between Malaysia and Singapore. This year’s competition in Ipoh will culminate in a prize presentation lunch, attended by the two Chancellors, Mr Hsieh Fu Hua, Chairman of the NUS Board of Trustees, Tan Sri Zarinah Anwar, Chairman of UM’s Board of Directors, Professor Tan Eng Chye, President of NUS, and Professor Dato’ Seri Ir Dr Noor Azuan Abu Osman, Vice-Chancellor of UM.

2nd UM–NUS Joint Academic Symposium: Precision Health

Also on 26 August, NUS and UM researchers will convene for the second Joint Academic Symposium held alongside the tournament, this year focusing on precision health. The symposium will feature 10 keynote addresses and technical talks by leading scholars and clinicians from both universities, alongside a special presentation on the shared history of Singapore and Malaysia. President Tharman and HRH Sultan Nazrin Shah will preside over the keynote and shared history presentations. The 2024 joint academic symposium, which was held at NUS, was on biomedical engineering.

Reading like it’s 1989

Arts & Culture

Reading like it’s 1989

Illustration of grade-school children reading at desks.

Illustration by Liz Zonarich/Harvard Staff

Max Larkin

Harvard Staff Writer

7 min read

Report on classroom literature shows staying power for ‘Gatsby,’ ‘Of Mice and Men,’ other classics. Time to move on?

Look back 40 years and you’ll see a lot of seismic change. The rise of the Internet, the smartphone revolution, and now AI everywhere. The end of the Cold War and the dawn of many messier conflicts. The overturning of paradigms of gender and sexuality, and then the backlash.

What are young people reading to help them make sense of their world? According to a recent report, pretty much the same things their parents read.

That report — compiled by researchers Kyungae Chae and Ricki Ginsberg for the National Council of Teachers of English — queried more than 4,000 public school teachers in the U.S. about what they assign students in grades six through 12.

It found little movement at the top of the English curriculum. F. Scott Fitzgerald’s “The Great Gatsby,” John Steinbeck’s “Of Mice and Men,” and a few Shakespeare tragedies occupy half of the top 10 most-assigned spots — just as they did in 1989. Even back in 1964, the top 10 was remarkably similar: If two Dickens novels have been dropped, “Hamlet” and “Macbeth” have not.

Classics are “classic” for a reason, of course. But that English-class inertia coincides with a trend that troubles educators, authors, and many parents: a long-term slide in the habit of reading among young Americans.

Some worry that — in a diverse and polarized nation — books that once felt accessible now feel remote or impenetrable, or that cultural conservatism or education bureaucracies have kept the curriculum from a healthy evolution.

With their many avid readers, Harvard’s classrooms contain almost as many views of the problem, if it is one, of curricular stagnation.

Stephanie Burt, the Donald P. and Katherine B. Loker Professor in the Department of English, made headlines last year as she launched a course on Taylor Swift. It was, in part, a self-conscious bid to use the world’s most popular songwriter as a gateway drug to Wordsworth and hermeneutics.

But Burt — also a working poet — said that her embrace of Swift is no sign that she has moved beyond, say, John Donne. To teach Shakespeare to young people, she said, is “not conservatism — it’s conservation, like the protection of old-growth forests.”

Rosette Cirillo, too, sees pedagogical value in true classics from the top of the English-language pantheon — though for a different reason.

Today, Cirillo is a lecturer and a teacher educator at the Harvard Graduate School of Education. But not so long ago, she was teaching eighth-grade English in Chelsea, Massachusetts, a largely Latin-American enclave where nearly half the students are classed as English learners.

“If I had an eighth-grader who went on to Harvard after he graduated Chelsea High, and he had never read Shakespeare, he would be at a serious disadvantage,” Cirillo said.

And, she stresses, she’s arguing less in terms of assimilation than of challenge.

“If I don’t understand ‘The Great Gatsby’ — this story of the American dream — and the idea of a masculine reinvention in order to achieve something, then I don’t understand the mythology of America enough that I could critique it, that I can say, ‘I don’t want that,’” Cirillo said. “We’re thinking about building a language and culture of power and building access for our students.”

“Better readers are better at understanding the multiple points of view that might be held about a civic or a moral issue. They’re less likely to think that if you disagree with them, it’s because you’re stupid.”

Catherine Snow

The teachers and researchers who spoke to the Gazette were divided on whether Steinbeck, Fitzgerald, and Harper Lee still deserve their ubiquity.

“To Kill a Mockingbird,” which Lee published in 1960, could be considered the foundational American text of the ‘white savior’ archetype, Burt said. And, yes, Steinbeck was a Nobel laureate in literature, but with “Of Mice and Men” — “the point is that somebody cognitively disabled is probably going to commit a murder … the high school curriculum would be better off without that,” she said.

And while Burt praised “Gatsby” as a great option for many teens, Catherine Snow was less charitable.

“I always hated that book,” she said.

Snow, a legendary literacy researcher, recently retired from the Harvard Graduate School of Education. She argued that hard evidence still shows real benefits that come from building readers.

Not only do well-read people perform better on tests of general knowledge — but as early as elementary school, Snow said, “better readers are better at understanding the multiple points of view that might be held about a civic or a moral issue. They’re less likely to think that if you disagree with them, it’s because you’re stupid … I think that’s pretty important.”

Digesting a text, analyzing tone and symbolism, understanding meaning and perspective — it’s all still useful. But, Snow said, some older books may no longer be ideal teaching tools.

“You can make all of those hoary texts relevant to students today,” Snow said. (True even of “Gatsby,” she joked: “Here’s a chance to learn about some really boring, worthless people, and how badly they’ve screwed up their lives.”)

“But,” Snow added, “an easier and perhaps more efficient approach would be to try to think about a selection of texts which are more automatically relevant that can be used to develop the same very important cognitive and linguistic and analytic skills.”

“Harry Potter” and “The Hunger Games” traffic, too, in “big, inherent, cultural themes and memes,” she said, and neither is “particularly easy reading.”

The cultural phenomena around those two series defied a decadeslong slump in pleasure reading among youth. In light of that trend, Cirillo and others see room to renovate the curriculum in the margins.

For Cirillo, stories by writers of color — from Toni Morrison to Junot Díaz — should by now be standard fare, part of a “new canon” to be read alongside the old one.

Burt’s chief concern, meanwhile, is the smartphone and its iron grip on our attention. “We’re living through a change in media that comes from a change in technology that is — unfortunately — at least half as consequential as the printing press,” Burt said. “I hate it; it makes me sad. But it’s not something we can wish away.”

Burt proposed shelving “Of Mice and Men” in favor of Frederick Douglass’ first autobiography, as “one piece of American prose literally everyone should have to read.”

Whether or not it can be neatly quantified, teachers of English still believe that there is something irreplaceable about profound immersion in the world of a book. Joining their number is M.G. Prezioso, a 2024 Ed School grad now conducting postdoctoral research on that very phenomenon.

In a recent journal article, Prezioso found a cyclical relationship between frequent reading and “story-world absorption” — a virtuous cycle of joy in reading that might lessen the need for external motivators.

And her ongoing research of grade-school students in Massachusetts and Pennsylvania has yielded early but promising correlations between that kind of absorption and skill at reading comprehension of the kind measured by a standardized test.

But that doesn’t mean abandoning what is already taught, Prezioso said. “There tends to be this dichotomy, first of all, between classic, canonical books versus books that are fun, as if canonical books can’t be engaging or dramatic or enjoyable to read.”

Prezioso was reminded of that in her surveys of high school students. What did they find most engrossing? “Harry Potter,” “The Hunger Games,” Edgar Allan Poe — and “Of Mice and Men.”

Why Malcolm X matters even more 60 years after his killing

Book cover and author Mark Whitaker.

Photo by Jennifer S. Altman

Nation & World

Why Malcolm X matters even more 60 years after his killing

New book by Mark Whitaker examines growth of artistic, political, cultural influence of controversial Civil Rights icon

Christina Pazzanese

Harvard Staff Writer

8 min read

Malcolm X was the provocative yet charismatic face of Black Nationalism and spokesman for the Nation of Islam before he was gunned down at an event in New York City on Feb. 21, 1965, after breaking with the group.

In a new book, “The Afterlife of Malcolm X: An Outcast Turned Icon’s Enduring Impact on America” (2025), journalist Mark Whitaker ’79 explores how the controversial Civil Rights figure’s stature and cultural legacy have only grown since his death.

Delivered with dazzling verbal flair, Malcolm X’s advocacy for Black self-determination and racial pride stirred many of his contemporaries, including Muhammad Ali, John Coltrane, Maya Angelou, and the founders of the Black Panther Party, and helped spur the Black Arts Movement and the experimental genre known as “Free Jazz.”

Whitaker notes that even decades later Malcolm X’s words and ideas have continued to influence new generations of artists and activists, including NBA Hall of Famer Kareem Abdul-Jabbar, playwright August Wilson, filmmaker Spike Lee, pop star Beyoncé, and rappers Tupac Shakur and Kendrick Lamar, among others.

Whitaker recently spoke with the Gazette about why Malcolm X continues to shape American culture. The conversation has been edited for clarity and length.


You say Malcolm X’s cultural influence is even greater than when he was alive. Why is that?

You have to start with “The Autobiography of Malcolm X” [co-authored by Alex Haley]. Many more people, even in the ’60s but certainly subsequently, have gotten to know him through “The Autobiography” than anything else. It’s an extraordinary book. There’s a reason why it’s one of the most read and influential books of the last half century. There are few books by public figures of his stature where you experience this extraordinary personal journey he underwent, from losing his parents at a young age to becoming a street hustler and going to prison, and then turning his life around through the Nation of Islam, becoming a national figure, but then becoming disenchanted with the Nation and with Elijah Muhammad, going out on his own, making a pilgrimage to Mecca, traveling the world, reassessing all of his thoughts and beliefs about white people and separatism and so forth. So that’s extraordinary.

“One of the things that’s interesting is he keeps getting rediscovered generation after generation by young people.”

One of the things that’s interesting is he keeps getting rediscovered generation after generation by young people. I think he spoke to young people for a variety of reasons. One is the reality of race that he described was closer to what they were witnessing than the “I Have a Dream” speech.

There was a hard-headed realism about his analysis of race relations that spoke to young people. Even before you get to politics, his emphasis was on psychology, on pride, and on self-belief and on culture. The belief that Black folks had to start with celebrating themselves and their own culture and their own history — that was extremely appealing to subsequent generations.

I also think there was just something about the way he communicated. There’s a reason that the pioneers of hip-hop thought that you could take snippets of his speeches and put them in the middle of raps, and it would still sound like it belonged. There was something incredibly direct and pithy and honest about the way he communicated.

You put those elements together — his hard-headed analysis, his emphasis on culture and self-belief and pride, and his extraordinary communication — generation after generation of people rediscover that and feel that all of those things are still very powerful.

So many important Black artists, writers, musicians, and activists of that period had either a personal relationship with Malcolm X or said they had an epiphany of sorts after listening to him speak. Why do you think that was?

Part of it was that he did believe, very strongly, that politics is downstream from culture. That was something that he very much believed and preached.

It was interesting because his parents were Black nationalists of the Marcus Garvey generation. And so followers of Marcus Garvey of their generation basically said, “Things are so bad for Black people in America that they have to go someplace else, whether it be someplace in Africa or the Caribbean.” There was this idea of a Black homeland, someplace else that everybody would get on ships and go to.

“In his view, the way Black folks should practice nationalism is by staying in America but demanding their own culture, which began with studying their own history.”

Malcolm explicitly said, “We are a nation, but we belong here.” In his view, the way Black folks should practice nationalism is by staying in America but demanding their own culture, which began with studying their own history. In his separatist era, it was literally we have our own networks of support. He was a big believer in Black business by and for Black people. That was a cultural project as much as a political project.

He lived in an era when a lot of Black culture, even though it was separate from white culture, sought to emulate white culture. A lot of the societies and the rituals were Black versions of white rituals. And he said, “That’s a form of brainwashing. We shouldn’t seek to be like white people. We should have our own culture.”

So, starting with the Black Arts Movement and the “Free Jazz” movement in the ’60s, and then later, the hip-hop generation and today’s artists like Kendrick Lamar, Beyoncé, all the great artists who still invoke him, that’s the message they’re picking up on as much as his political message.

There’s also something just so supremely confident about him that people relate to. He was unapologetically who he was. He’s preaching Black pride and so forth with such supreme elegance and confidence and humor. That’s always appealing.

One chapter looks at Malcolm X as a hero to the political left and right. President Barack Obama has talked about how influential the autobiography was on him as a teenager, and Supreme Court Justice Clarence Thomas has also spoken about his attraction to Malcolm X and his message of self-determination when he was in college. Few political or cultural figures today have that kind of appeal. What do you attribute that to?

There are people on the left who revere Malcolm X who were appalled that Clarence Thomas would say he’s also a hero to him, and feel like Clarence Thomas just cherry-picked the parts of his message that are convenient to him — the emphasis on Black business, the skepticism about integration and so forth. I spent a lot of time researching that chapter and talking not to Thomas himself, but to his clerks and people who had written about his interest in Malcolm X, and I think it was sincere.

Malcolm X was a truth teller. I don’t think he was interested in being a hero to white people. He would go around saying things like, “I prefer the white racist who at least has his cards on the table to the white liberal who can’t be trusted.” And as we see today, people embrace people who attack the people who they oppose.

“Malcolm X came to Harvard in 1961 and then twice in 1964 to talk with Harvard Law School students and to debate faculty. He was known for his willingness to speak in all sorts of settings, whether a college campus, a street corner, or a TV talk show.”

Would Malcolm X be surprised to find that he’s still so influential?

It’s a tricky thing for biographers to say what would he have thought. It’s presumptuous, but one of the things that is clear is that people at the time who were followers of his said his message and his influence will outlive him. Actor Ossie Davis said that in his eulogy. He said, “What we put in the ground now is only a seed which will rise up to meet us.”

Sociologist Harry Edwards, when he was organizing a Malcolm X day at San Jose State — this was a year after King’s assassination — people said, “Why all this fuss about Malcolm X and not about King?” And Harry Edwards said the thing about Malcolm X is it’s not so much what he did during his lifetime, it’s what he inspired in others, which will continue. There’s something about Malcolm that is still alive in the influence that he’s having on all these other people.

How AI could speed the development of RNA vaccines and other RNA therapies

Using artificial intelligence, MIT researchers have come up with a new way to design nanoparticles that can more efficiently deliver RNA vaccines and other types of RNA therapies.

After training a machine-learning model to analyze thousands of existing delivery particles, the researchers used it to predict new materials that would work even better. The model also enabled the researchers to identify particles that would work well in different types of cells, and to discover ways to incorporate new types of materials into the particles.

“What we did was apply machine-learning tools to help accelerate the identification of optimal ingredient mixtures in lipid nanoparticles to help target a different cell type or help incorporate different materials, much faster than previously was possible,” says Giovanni Traverso, an associate professor of mechanical engineering at MIT, a gastroenterologist at Brigham and Women’s Hospital, and the senior author of the study.

This approach could dramatically speed the process of developing new RNA vaccines, as well as therapies that could be used to treat obesity, diabetes, and other metabolic disorders, the researchers say.

Alvin Chan, a former MIT postdoc who is now an assistant professor at Nanyang Technological University, and Ameya Kirtane, a former MIT postdoc who is now an assistant professor at the University of Minnesota, are the lead authors of the new open-access study, which appears today in Nature Nanotechnology.

Particle predictions

RNA vaccines, such as the vaccines for SARS-CoV-2, are usually packaged in lipid nanoparticles (LNPs) for delivery. These particles protect mRNA from being broken down in the body and help it to enter cells once injected.

Creating particles that handle these jobs more efficiently could help researchers to develop even more effective vaccines. Better delivery vehicles could also make it easier to develop mRNA therapies that encode genes for proteins that could help to treat a variety of diseases.

In 2024, Traverso’s lab launched a multiyear research program, funded by the U.S. Advanced Research Projects Agency for Health (ARPA-H), to develop new ingestible devices that could achieve oral delivery of RNA treatments and vaccines.

“Part of what we’re trying to do is develop ways of producing more protein, for example, for therapeutic applications. Maximizing the efficiency is important to be able to boost how much we can have the cells produce,” Traverso says.

A typical LNP consists of four components — a cholesterol, a helper lipid, an ionizable lipid, and a lipid that is attached to polyethylene glycol (PEG). Different variants of each of these components can be swapped in to create a huge number of possible combinations. Changing up these formulations and testing each one individually is very time-consuming, so Traverso, Chan, and their colleagues decided to turn to artificial intelligence to help speed up the process.
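
A quick count shows why exhaustive testing is impractical; the variant counts below are hypothetical, but even these modest numbers produce thousands of candidate formulations before molar ratios are considered:

```python
# Quick count of the design space. The number of variants per component below
# is hypothetical, but even these modest counts produce thousands of candidate
# formulations before molar ratios are varied.
from itertools import product

ionizable_lipids = [f"IL-{i}" for i in range(20)]
helper_lipids    = [f"HL-{i}" for i in range(10)]
cholesterols     = [f"CH-{i}" for i in range(5)]
peg_lipids       = [f"PEG-{i}" for i in range(8)]

formulations = list(product(ionizable_lipids, helper_lipids, cholesterols, peg_lipids))
print(len(formulations))  # 20 * 10 * 5 * 8 = 8,000 combinations
```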

“Most AI models in drug discovery focus on optimizing a single compound at a time, but that approach doesn’t work for lipid nanoparticles, which are made of multiple interacting components,” Chan says. “To tackle this, we developed a new model called COMET, inspired by the same transformer architecture that powers large language models like ChatGPT. Just as those models understand how words combine to form meaning, COMET learns how different chemical components come together in a nanoparticle to influence its properties — like how well it can deliver RNA into cells.”
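
A minimal sketch of that idea, with dimensions, vocabulary handling, and pooling all assumed rather than taken from the actual COMET code, treats each component as a token and regresses delivery efficiency from the encoded set:

```python
# Minimal sketch of the idea described above: treat each nanoparticle component
# as a token, embed it, and let a small transformer encoder model how the
# components interact before predicting delivery efficiency. Dimensions,
# vocabulary handling, and pooling are assumptions, not the actual COMET code.
import torch
import torch.nn as nn

class FormulationTransformer(nn.Module):
    def __init__(self, vocab_size: int = 200, d_model: int = 64,
                 nhead: int = 4, num_layers: int = 2):
        super().__init__()
        self.component_embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.head = nn.Linear(d_model, 1)   # predicted mRNA delivery efficiency

    def forward(self, component_ids: torch.Tensor) -> torch.Tensor:
        # component_ids: (batch, n_components), e.g. 4 base lipids or 5 with a PBAE.
        h = self.encoder(self.component_embed(component_ids))
        return self.head(h.mean(dim=1)).squeeze(-1)   # mean-pool over components

model = FormulationTransformer()
batch = torch.randint(0, 200, (16, 4))    # 16 integer-coded formulations, 4 components each
predicted_efficiency = model(batch)        # shape: (16,)
```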

To generate training data for their machine-learning model, the researchers created a library of about 3,000 different LNP formulations. The team tested each of these 3,000 particles in the lab to see how efficiently they could deliver their payload to cells, then fed all of this data into a machine-learning model.

After the model was trained, the researchers asked it to predict new formulations that would work better than existing LNPs. They tested those predictions by using the new formulations to deliver mRNA encoding a fluorescent protein to mouse skin cells grown in a lab dish. They found that the LNPs predicted by the model did indeed work better than the particles in the training data, and in some cases better than LNP formulations that are used commercially.

Accelerated development

Once the researchers showed that the model could accurately predict particles that would efficiently deliver mRNA, they began asking additional questions. First, they wondered if they could train the model on nanoparticles that incorporate a fifth component: a type of polymer known as branched poly beta amino esters (PBAEs).

Research by Traverso and his colleagues has shown that these polymers can effectively deliver nucleic acids on their own, so they wanted to explore whether adding them to LNPs could improve LNP performance. The MIT team created a set of about 300 LNPs that also include these polymers, which they used to train the model. The resulting model could then predict additional formulations with PBAEs that would work better.

Next, the researchers set out to train the model to make predictions about LNPs that would work best in different types of cells, including a type of cell called Caco-2, which is derived from colorectal cancer cells. Again, the model was able to predict LNPs that would efficiently deliver mRNA to these cells.

Lastly, the researchers used the model to predict which LNPs could best withstand lyophilization — a freeze-drying process often used to extend the shelf-life of medicines.

“This is a tool that allows us to adapt it to a whole different set of questions and help accelerate development. We did a large training set that went into the model, but then you can do much more focused experiments and get outputs that are helpful on very different kinds of questions,” Traverso says.

He and his colleagues are now working on incorporating some of these particles into potential treatments for diabetes and obesity, which are two of the primary targets of the ARPA-H funded project. Therapeutics that could be delivered using this approach include GLP-1 mimics with similar effects to Ozempic.

This research was funded by the GO Nano Marble Center at the Koch Institute, the Karl van Tassel Career Development Professorship, the MIT Department of Mechanical Engineering, Brigham and Women’s Hospital, and ARPA-H.

© Image: Courtesy of the researchers; MIT News

“What we did was apply machine-learning tools to help accelerate the identification of optimal ingredient mixtures in lipid nanoparticles to help target a different cell type or help incorporate different materials, much faster than previously was possible,” says Giovanni Traverso.

NUS community honoured with National Day Awards on Singapore’s 60th birthday

Mr Tan Gee Paw (MSc ISE '71), Adjunct Professor with the NUS College of Design and Engineering, and former Chairman of Changi Airport Group, was honoured with the Order of Nila Utama at the National Day Awards, earning this year's top accolade for his role in advancing Singapore’s development in areas such as aviation, rail transport and water security.

Professor Lui Pao Chuen (Science '65), Temasek Defence Professor at the Temasek Defence Systems Institute (TDSI) and Chairman of the Singapore Nuclear Research and Safety Institute (SNRSI), both key institutes at NUS, received the next highest award, the Distinguished Service Order. He was lauded for his contributions to Singapore across diverse fields, including education, science and technology, urban development, defence and infrastructure.    

The two recipients were among the nearly 200 members of the NUS community who were recognised for their merit and service to Singapore. In total, some 7,210 people were presented with National Day Awards in the year of Singapore’s 60th anniversary.

Mr Tan, who chaired Changi Airport Group from October 2020 to March 2025, was cited for steering the airport operator through the headwinds posed by COVID-19, maintaining its mission as a leading air hub and ensuring a swift recovery in post-pandemic traffic levels. Beyond shaping the development of the upcoming Terminal 5 and Changi East airport expansion, he was credited with spearheading the group’s efforts in innovation and sustainability, boosting its operational efficiency and overseeing key infrastructure upgrades.

During his term as Chairman of PUB, Singapore’s national water agency, from 2001 to 2017, he led bold initiatives and long-term infrastructure projects, such as the launch of NEWater and the Marina Barrage. As Advisor to the Land Transport Authority and the Ministry of Transport, he provided guidance on complex technical issues related to rail projects and strategic advice on organisational matters.

Prof Lui was recognised for his pivotal role in advancing Singapore’s research capabilities and technologies across multiple spheres – from nuclear science and water sustainability to national security – through his leadership as Chairman of SNRSI and the Public Utilities Board Project Evaluation Panel, as well as Advisor to the Ministry of Home Affairs, among others.

He had served with distinction in the Ministry of Defence for over 40 years, including 22 years as Chief Defence Scientist. A pioneering advocate of systems thinking, Prof Lui was reappointed Temasek Defence Professor at TDSI in 2024 in recognition of his outstanding scholarly accomplishments, a role in which he continues to enhance cross-disciplinary research capabilities in defence systems at NUS.

Spotlighting the NUS community

Other illustrious members of the NUS community also received recognition across the 22 award categories. They included many members of the University's Board of Trustees (BOT), both past and present.

NUS Pro-Chancellor Mr Gautam Banerjee was awarded the Public Service Star in his capacity as Director of the GIC Board, Chairman of the GIC Audit Committee, and a member of the GIC Human Resource and Organization Committee.

The Meritorious Service Medal was conferred upon BOT member Mr Tan Chong Meng (Engineering '83, MSc ISE '87) for his contributions as Chairman of the National University Health System, former Group CEO of PSA International, and former Chairman of JTC Corporation.

Another Trustee, Ms Lim Wan Yong, received the Long Service Medal in her capacity as Permanent Secretary (Education) at the Ministry of Education. A 2019 recipient of the Public Administration Medal (Silver) for her work at the Ministry of National Development, she has also held appointments in the Ministry of Social and Family Development and the Prime Minister’s Office.

Among the recipients of the Meritorious Service Medal were former Trustees Mr Davinder Singh SC (Law '82), Dr Noeleen Heyzer (Arts & Social Sciences '71, MSocSci '73, HonDLitt '25) and Dr Sudha Nair (Arts & Social Sciences '86, PhD '07).

Mr Singh, who served on the NUS Board from 2012 to 2015, was conferred the award in his capacity as Chairman of the Singapore International Arbitration Centre. Social scientist Dr Noeleen Heyzer is a 2025 Honorary Graduate of the Doctor of Letters from NUS, Rector of NUS’ Ridge View Residential College, and also former Under-Secretary-General of the United Nations. She served on the NUS Board from 2013 to 2019. Dr Sudha Nair, who taught at the Department of Social Work in the NUS Faculty of Arts and Social Sciences and served as Trustee from 2021 to 2024, received the award in her capacity as CEO of PAVE, a social service agency which provides an integrated service for individual and family protection.

Several leaders of NUS colleges, schools and institutes also received honours. 

Mr Janadas Devan (Arts & Social Sciences '79), Director of the Institute of Policy Studies at NUS, won the Meritorious Service Medal for his work as Deputy Secretary at the Prime Minister’s Office, and as Senior Advisor at the Ministry of Digital Development and Information.

Mr Tan Tee How (Business '83), the Chair of the Governing Council at NUS’ Centre for Trusted Internet and Community, won the same medal for his contributions as Chairman of the Gambling Regulatory Authority of Singapore and Board Chairman of NHG Health.

Ms Janet Ang (Business '82), Chairman of the NUS-ISS Management Board, won the Public Service Star in her capacity as a Former Council Member of the Singapore Business Federation and Chairman of the Singapore Business Federation Foundation.

The Public Administration Medal (Silver) was awarded to Associate Professor Leong Ching (Arts & Social Sciences '92, MA '96, Public Policy PhD '13), NUS Vice Provost (Student Life) and Acting Dean of the Lee Kuan Yew School of Public Policy, and Professor Thomas M. Coffman, Dean of Duke-NUS Medical School.

Ms Euleen Goh, who previously served as Rector of the former Cinnamon College at NUS, won the Meritorious Service Medal for her contributions as the former Chairman of SATS.

Many of the University’s alumni also received awards. These include:

Meritorious Service Medal

  • Mr Hoong Wee Teck (Arts & Social Sciences ’87)

The Public Service Star (Bar)

  • Mr Chandra Mohan s/o Rethnam (Law '86)
  • Mr Chiang Heng Liang (Business '85, MFE '02)
  • Mr Lye Hoong Yip, Raymond (Law '90)
  • Mr Peter Sim Swee Yam (Law '80)

The Public Service Star

  • Mr Cheah Kok Keong (Arts & Social Sciences ’91)
  • Mr Chey Chor Wai (Business '76)
  • Mr Foo Hee Jug (Science '90)
  • Mr John Ng Peng Wah (Engineering '85, MSc ISE '90)
  • Assoc Prof Leong Kwok Fai Mark (Medicine '86)
  • Assoc Prof Stephen Phua Lye Huat (Law '88)


Study sheds light on graphite’s lifespan in nuclear reactors

Graphite is a key structural component in some of the world’s oldest nuclear reactors and many of the next-generation designs being built today. But it also densifies and swells in response to radiation — and the mechanism behind those changes has proven difficult to study.

Now, MIT researchers and collaborators have uncovered a link between properties of graphite and how the material behaves in response to radiation. The findings could lead to more accurate, less destructive ways of predicting the lifespan of graphite materials used in reactors around the world.

“We did some basic science to understand what leads to swelling and, eventually, failure in graphite structures,” says MIT Research Scientist Boris Khaykovich, senior author of the new study. “More research will be needed to put this into practice, but the paper proposes an attractive idea for industry: that you might not need to break hundreds of irradiated samples to understand their failure point.”

Specifically, the study shows a connection between the size of the pores within graphite and the way the material swells and shrinks in volume, leading to degradation.

“The lifetime of nuclear graphite is limited by irradiation-induced swelling,” says co-author and MIT Research Scientist Lance Snead. “Porosity is a controlling factor in this swelling, and while graphite has been extensively studied for nuclear applications since the Manhattan Project, we still do not have a clear understanding of the role porosity plays in both mechanical properties and swelling. This work addresses that.”

The open-access paper appears this week in Interdisciplinary Materials. It is co-authored by Khaykovich, Snead, MIT Research Scientist Sean Fayfar, former MIT research fellow Durgesh Rai, Stony Brook University Assistant Professor David Sprouster, Oak Ridge National Laboratory Staff Scientist Anne Campbell, and Argonne National Laboratory Physicist Jan Ilavsky.

A long-studied, complex material

Ever since 1942, when physicists and engineers built the world’s first nuclear reactor on a converted squash court at the University of Chicago, graphite has played a central role in the generation of nuclear energy. That first reactor, dubbed the Chicago Pile, was constructed from about 40,000 graphite blocks, many of which contained nuggets of uranium.

Today graphite is a vital component of many operating nuclear reactors and is expected to play a central role in next-generation reactor designs like molten-salt and high-temperature gas reactors. That’s because graphite is a good neutron moderator, slowing down the neutrons released by nuclear fission so they are more likely to create fissions themselves and sustain a chain reaction.

“The simplicity of graphite makes it valuable,” Khaykovich explains. “It’s made of carbon, and it’s relatively well-known how to make it cleanly. Graphite is a very mature technology. It’s simple, stable, and we know it works.”

But graphite also has its complexities.

“We call graphite a composite even though it’s made up of only carbon atoms,” Khaykovich says. “It includes ‘filler particles’ that are more crystalline, then there is a matrix called a ‘binder’ that is less crystalline, then there are pores that span in length from nanometers to many microns.”

Each graphite grade has its own composite structure, but they all contain fractals, or shapes that look the same at different scales.

Those complexities have made it hard to predict how graphite will respond to radiation in microscopic detail, although it’s been known for decades that when graphite is irradiated, it first densifies, reducing its volume by up to 10 percent, before swelling and cracking. The volume fluctuation is caused by changes to graphite’s porosity and lattice stress.

“Graphite deteriorates under radiation, as any material does,” Khaykovich says. “So, on the one hand we have a material that’s extremely well-known, and on the other hand, we have a material that is immensely complicated, with a behavior that’s impossible to predict through computer simulations.”

For the study, the researchers received irradiated graphite samples from Oak Ridge National Laboratory. Co-authors Campbell and Snead were involved in irradiating the samples some 20 years ago. The samples are a grade of graphite known as G347A.

The research team used an analysis technique known as X-ray scattering, which uses the scattered intensity of an X-ray beam to analyze the properties of a material. Specifically, they looked at the distribution of sizes and surface areas of the sample’s pores, which can be characterized by the material’s fractal dimensions.

“When you look at the scattering intensity, you see a large range of porosity,” Fayfar says. “Graphite has porosity over such large scales, and you have this fractal self-similarity: The pores in very small sizes look similar to pores spanning microns, so we used fractal models to relate different morphologies across length scales.”
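
As a rough illustration of how a fractal dimension can be read off a scattering curve, the sketch below fits the power-law region of a small-angle scattering profile, where the intensity falls off as I(q) ~ q^(-d). The data file and q-range are placeholders, and the study itself used more detailed multi-scale fractal models rather than a single slope.

```python
# Illustrative sketch only: estimate a fractal dimension from the power-law
# region of a small-angle X-ray scattering curve, I(q) ~ q^(-d).
# The data file and the chosen q-range are hypothetical.
import numpy as np

q, intensity = np.loadtxt("saxs_curve.txt", unpack=True)  # q in 1/angstrom

# Restrict the fit to the power-law region of the curve.
mask = (q > 1e-3) & (q < 1e-1)
slope, _ = np.polyfit(np.log(q[mask]), np.log(intensity[mask]), 1)

d = -slope
if d <= 3:
    print(f"mass-fractal dimension is roughly {d:.2f}")
else:
    print(f"surface-fractal dimension is roughly {6 - d:.2f}")  # I(q) ~ q^-(6-Ds)
```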

Fractal models had been used on graphite samples before, but not on irradiated samples to see how the material’s pore structures changed. The researchers found that when graphite is first exposed to radiation, its pores get filled as the material degrades.

“But what was quite surprising to us is the [size distribution of the pores] turned back around,” Fayfar says. “We had this recovery process that matched our overall volume plots, which was quite odd. It seems like after graphite is irradiated for so long, it starts recovering. It’s sort of an annealing process where you create some new pores, then the pores smooth out and get slightly bigger. That was a big surprise.”

The researchers found that the size distribution of the pores closely follows the volume change caused by radiation damage.

“Finding a strong correlation between the [size distribution of pores] and the graphite’s volume changes is a new finding, and it helps connect to the failure of the material under irradiation,” Khaykovich says. “It’s important for people to know how graphite parts will fail when they are under stress and how failure probability changes under irradiation.”

From research to reactors

The researchers plan to study other graphite grades and explore further how pore sizes in irradiated graphite correlate with the probability of failure. They speculate that a statistical technique known as the Weibull Distribution could be used to predict graphite’s time until failure. The Weibull Distribution is already used to describe the probability of failure in ceramics and other porous materials like metal alloys.
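
As a hedged illustration of how such an approach might look in practice, the sketch below fits a two-parameter Weibull distribution to hypothetical fracture-strength data and evaluates the failure probability at a chosen service stress. It is the standard statistical recipe the researchers reference, not their method.

```python
# Minimal sketch of the idea, not the study's method: fit a two-parameter
# Weibull distribution to (hypothetical) fracture-strength measurements and
# read off the probability of failure at a given service stress.
import numpy as np
from scipy import stats

strengths_mpa = np.loadtxt("graphite_strengths.txt")  # placeholder data file

# floc=0 gives the standard two-parameter Weibull used for brittle materials.
shape_m, _, scale_s0 = stats.weibull_min.fit(strengths_mpa, floc=0)
print(f"Weibull modulus m = {shape_m:.1f}, characteristic strength = {scale_s0:.1f} MPa")

service_stress = 20.0  # MPa, illustrative value
p_fail = stats.weibull_min.cdf(service_stress, shape_m, loc=0, scale=scale_s0)
print(f"estimated failure probability at {service_stress} MPa: {p_fail:.3f}")
```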

Khaykovich also speculated that the findings could contribute to our understanding of why materials densify and swell under irradiation.

“There’s no quantitative model of densification that takes into account what’s happening at these tiny scales in graphite,” Khaykovich says. “Graphite irradiation densification reminds me of sand or sugar, where when you crush big pieces into smaller grains, they densify. For nuclear graphite, the crushing force is the energy that neutrons bring in, causing large pores to get filled with smaller, crushed pieces. But more energy and agitation create still more pores, and so graphite swells again. It’s not a perfect analogy, but I believe analogies bring progress for understanding these materials.”

The researchers describe the paper as an important step toward informing graphite production and use in nuclear reactors of the future.

“Graphite has been studied for a very long time, and we’ve developed a lot of strong intuitions about how it will respond in different environments, but when you’re building a nuclear reactor, details matter,” Khaykovich says. “People want numbers. They need to know how much thermal conductivity will change, how much cracking and volume change will happen. If components are changing volume, at some point you need to take that into account.”

This work was supported, in part, by the U.S. Department of Energy.

© Image: MIT News; iStock

New research uncovered a link between properties of graphite and how the material behaves in response to radiation. “It seems like after graphite is irradiated for so long, it starts recovering,” says Sean Fayfar.

Using generative AI, researchers design compounds that can kill drug-resistant bacteria

With help from artificial intelligence, MIT researchers have designed novel antibiotics that can combat two hard-to-treat infections: drug-resistant Neisseria gonorrhoeae and methicillin-resistant Staphylococcus aureus (MRSA).

Using generative AI algorithms, the research team designed more than 36 million possible compounds and computationally screened them for antimicrobial properties. The top candidates they discovered are structurally distinct from any existing antibiotics, and they appear to work by novel mechanisms that disrupt bacterial cell membranes.

This approach allowed the researchers to generate and evaluate theoretical compounds that have never been seen before — a strategy that they now hope to apply to identify and design compounds with activity against other species of bacteria.

“We’re excited about the new possibilities that this project opens up for antibiotics development. Our work shows the power of AI from a drug design standpoint, and enables us to exploit much larger chemical spaces that were previously inaccessible,” says James Collins, the Termeer Professor of Medical Engineering and Science in MIT’s Institute for Medical Engineering and Science (IMES) and Department of Biological Engineering, and a member of the Broad Institute.

Collins is the senior author of the study, which appears today in Cell. The paper’s lead authors are MIT postdoc Aarti Krishnan, former postdoc Melis Anahtar ’08, and Jacqueline Valeri PhD ’23.

Exploring chemical space

Over the past 45 years, a few dozen new antibiotics have been approved by the FDA, but most of these are variants of existing antibiotics. At the same time, bacterial resistance to many of these drugs has been growing. Globally, it is estimated that drug-resistant bacterial infections cause nearly 5 million deaths per year.

In hopes of finding new antibiotics to fight this growing problem, Collins and others at MIT’s Antibiotics-AI Project have harnessed the power of AI to screen huge libraries of existing chemical compounds. This work has yielded several promising drug candidates, including halicin and abaucin.

To build on that progress, Collins and his colleagues decided to expand their search into molecules that can’t be found in any chemical libraries. By using AI to generate hypothetically possible molecules that don’t exist or haven’t been discovered, they realized that it should be possible to explore a much greater diversity of potential drug compounds.

In their new study, the researchers employed two different approaches: First, they directed generative AI algorithms to design molecules based on a specific chemical fragment that showed antimicrobial activity, and second, they let the algorithms freely generate molecules, without having to include a specific fragment.

For the fragment-based approach, the researchers sought to identify molecules that could kill N. gonorrhoeae, a Gram-negative bacterium that causes gonorrhea. They began by assembling a library of about 45 million known chemical fragments, consisting of all possible combinations of 11 atoms of carbon, nitrogen, oxygen, fluorine, chlorine, and sulfur, along with fragments from Enamine’s REadily AccessibLe (REAL) space.

Then, they screened the library using machine-learning models that Collins’ lab had previously trained to predict antibacterial activity against N. gonorrhoeae. This resulted in nearly 4 million fragments. They narrowed down that pool by removing any fragments that were predicted to be cytotoxic to human cells, displayed chemical liabilities, or were similar to existing antibiotics. This left them with about 1 million candidates.
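
One of those filters, novelty relative to existing antibiotics, is commonly implemented with fingerprint similarity. The sketch below shows one plausible version using open-source RDKit tools; the reference molecules, similarity cutoff, and toy candidates are placeholders rather than the study's actual criteria.

```python
# Hedged illustration (not the study's pipeline): drop candidate fragments
# that look too similar to known antibiotics, using Morgan fingerprints and
# Tanimoto similarity in RDKit. The reference list and 0.5 cutoff are
# placeholders for the paper's actual filters.
from rdkit import Chem
from rdkit.Chem import AllChem, DataStructs

known_antibiotics = [Chem.MolFromSmiles(s) for s in [
    "CC1(C)SC2C(NC(=O)Cc3ccccc3)C(=O)N2C1C(=O)O",  # penicillin G (no stereochemistry)
]]
known_fps = [AllChem.GetMorganFingerprintAsBitVect(m, 2, nBits=2048)
             for m in known_antibiotics]

def too_similar(smiles, cutoff=0.5):
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return True  # discard unparseable structures as well
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)
    return any(DataStructs.TanimotoSimilarity(fp, ref) >= cutoff for ref in known_fps)

candidates = ["CCOC(=O)c1ccc(N)cc1", "Nc1ncnc2[nH]cnc12"]  # toy examples
novel = [s for s in candidates if not too_similar(s)]
print(novel)
```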

“We wanted to get rid of anything that would look like an existing antibiotic, to help address the antimicrobial resistance crisis in a fundamentally different way. By venturing into underexplored areas of chemical space, our goal was to uncover novel mechanisms of action,” Krishnan says.

Through several rounds of additional experiments and computational analysis, the researchers identified a fragment they called F1 that appeared to have promising activity against N. gonorrhoeae. They used this fragment as the basis for generating additional compounds, using two different generative AI algorithms.

One of those algorithms, known as chemically reasonable mutations (CReM), works by starting with a particular molecule containing F1 and then generating new molecules by adding, replacing, or deleting atoms and chemical groups. The second algorithm, F-VAE (fragment-based variational autoencoder), takes a chemical fragment and builds it into a complete molecule. It does so by learning patterns of how fragments are commonly modified, based on its pretraining on more than 1 million molecules from the ChEMBL database.
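
The overall generate-then-screen loop can be sketched as follows. Every name in this example, from the seed fragment to the generator and activity model, is a hypothetical stand-in; it shows the shape of the workflow, not the paper's code or the CReM and F-VAE implementations.

```python
# Conceptual sketch of the fragment-based generate-then-screen loop.
# The F1 pattern, generate_analogs, and predict_activity are all stand-ins.
from rdkit import Chem

F1_PATTERN = Chem.MolFromSmarts("c1ccncc1")  # placeholder core, not the real F1

def generate_analogs(seed_smiles):
    """Stand-in for a generative step (CReM-style mutations or a fragment-based
    VAE) that proposes complete molecules built around the seed fragment."""
    yield from ["Cc1ccncc1", "OCc1ccncc1N"]  # toy output only

def predict_activity(mol):
    """Stand-in for the lab's trained antibacterial-activity model."""
    return 0.95  # pretend every candidate scores well

hits = []
for smi in generate_analogs("c1ccncc1"):
    mol = Chem.MolFromSmiles(smi)
    if mol is None or not mol.HasSubstructMatch(F1_PATTERN):
        continue  # keep only molecules that still contain the seed fragment
    if predict_activity(mol) > 0.9:
        hits.append(smi)  # forward high-scoring designs toward synthesis
print(hits)
```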

Those two algorithms generated about 7 million candidates containing F1, which the researchers then computationally screened for activity against N. gonorrhoeae. This screen yielded about 1,000 compounds, and the researchers selected 80 of those to see if they could be produced by chemical synthesis vendors. Only two of these could be synthesized, and one of them, named NG1, was very effective at killing N. gonorrhoeae in a lab dish and in a mouse model of drug-resistant gonorrhea infection.

Additional experiments revealed that NG1 interacts with a protein called LptA, a novel drug target involved in the synthesis of the bacterial outer membrane. It appears that the drug works by interfering with membrane synthesis, which is fatal to cells.

Unconstrained design

In a second round of studies, the researchers explored the potential of using generative AI to freely design molecules, using the Gram-positive bacterium S. aureus as their target.

Again, the researchers used CReM and VAE to generate molecules, but this time with no constraints other than the general rules of how atoms can join to form chemically plausible molecules. Together, the models generated more than 29 million compounds. The researchers then applied the same filters that they did to the N. gonorrhoeae candidates, but focusing on S. aureus, eventually narrowing the pool down to about 90 compounds.

They were able to synthesize and test 22 of these molecules, and six of them showed strong antibacterial activity against multi-drug-resistant S. aureus grown in a lab dish. They also found that the top candidate, named DN1, was able to clear a methicillin-resistant S. aureus (MRSA) skin infection in a mouse model. These molecules also appear to interfere with bacterial cell membranes, but with broader effects not limited to interaction with one specific protein.

Phare Bio, a nonprofit that is also part of the Antibiotics-AI Project, is now working on further modifying NG1 and DN1 to make them suitable for additional testing.

“In a collaboration with Phare Bio, we are exploring analogs, as well as working on advancing the best candidates preclinically, through medicinal chemistry work,” Collins says. “We are also excited about applying the platforms that Aarti and the team have developed toward other bacterial pathogens of interest, notably Mycobacterium tuberculosis and Pseudomonas aeruginosa.”

The research was funded, in part, by the U.S. Defense Threat Reduction Agency, the National Institutes of Health, the Audacious Project, Flu Lab, the Sea Grape Foundation, Rosamund Zander and Hansjorg Wyss for the Wyss Foundation, and an anonymous donor.

© Credit: iStock, MIT News

With help from artificial intelligence, MIT researchers have discovered novel antibiotics that can combat two hard-to-treat infections: a drug-resistant form of gonorrhea and methicillin-resistant Staphylococcus aureus (MRSA).

Glowing algae reveal the geometry of life

The multicellular model organism Volvox, showing individual somatic cells (isolated magenta circles distributed over the entire surface), daughter spheroids (a few larger clusters of magenta circles), and compartments around each somatic cell (green).

In a study published in the journal Proceedings of the National Academy of Sciences (PNAS), a team of British and German scientists revealed the structure of the extracellular matrix in Volvox carteri, a type of green algae that is often used to study how multicellular organisms evolved from single-celled ancestors.

The extracellular matrix (ECM) is a scaffold-like material that surrounds cells, providing physical support, influencing shape, and playing an important role in development and signalling. Found in animals, plants, fungi and algae, it also played a vital part in the transition from unicellular to multicellular life.

Because the ECM exists outside the cells that produce it, scientists believe it forms through self-assembly: a process still not fully understood, even in the simplest organisms.

To investigate, researchers at the University of Bielefeld genetically engineered a strain of Volvox in which a key ECM protein called pherophorin II was made fluorescent so the matrix’s structure could be clearly seen under a microscope.

What they saw was an intricate foam-like network of rounded compartments that wrapped around each of Volvox’s roughly 2,000 somatic, or non-reproductive, cells.

Working with mathematicians at the University of Cambridge, the team used machine learning to quantify the geometry of these compartments. The data revealed a stochastic, or randomly influenced, growth pattern that shares similarities with the way foams expand when hydrated.
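
The basic quantification step, turning a fluorescence image of labelled compartments into per-compartment geometry, can be illustrated with classical image-analysis tools, as in the sketch below. The file name, threshold, and size cutoff are placeholders, and the study's own pipeline relied on machine-learning segmentation rather than simple thresholding.

```python
# Simplified sketch only: segment fluorescent ECM compartments and measure
# their areas with scikit-image. The image file is a hypothetical placeholder.
import numpy as np
from skimage import io, filters, measure

img = io.imread("pherophorin_channel.tif")      # fluorescent ECM channel
binary = img > filters.threshold_otsu(img)      # separate ECM from background
labels = measure.label(binary)                  # one label per connected region

areas = np.array([r.area for r in measure.regionprops(labels)])
areas = areas[areas > 50]                       # drop tiny noise regions
print(f"{areas.size} compartments, mean area {areas.mean():.1f} px^2, "
      f"CV = {areas.std() / areas.mean():.2f}") # spread hints at stochastic growth
```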

These shapes followed a statistical pattern that also appears in materials like grains and emulsions, and in biological tissues. The findings suggest that while individual cells produce ECM proteins at uneven rates, the overall organism maintains a regular, spherical form.

That coexistence – between noisy behaviour at the level of single cells and precise geometry at the level of the whole organism – raises new questions about how multicellular life manages to build reliable forms from unreliable parts.

“Our results provide quantitative information relating to a fundamental question in developmental biology: how do cells make structures external to themselves in a robust and accurate manner,” said Professor Raymond E Goldstein from Cambridge’s Department of Applied Mathematics and Theoretical Physics, who co-led the research. “It also shows the exciting results we can achieve when biologists, physicists and mathematicians work together on understanding the mysteries of life.”

“By tracking a single structural protein, we gained insight into the principles behind the self-organisation of the extracellular matrix,” said Professor Armin Hallmann from the University of Bielefeld, who co-led the research. “Its geometry gives us a meaningful readout of how the organism develops as it grows.”

The research was carried out by postdoctoral researchers Dr Benjamin von der Heyde and Dr Eva Laura von der Heyde and Hallmann in Bielefeld, working with Cambridge PhD student Anand Srinivasan, postdoctoral researcher Dr Sumit Kumar Birwa, Senior Research Associate Dr Steph Höhn and Goldstein, the Alan Turing Professor of Complex Physical Systems in Cambridge’s Department of Applied Mathematics and Theoretical Physics.

The project was supported in part by Wellcome and the John Templeton Foundation. Raymond Goldstein is a Fellow of Churchill College, Cambridge.

 

Reference:
B von der Heyde, A Srinivasan et al. ‘Spatiotemporal distribution of the glycoprotein pherophorin II reveals stochastic geometry of the growing ECM of Volvox carteri,’ Proceedings of the National Academy of Sciences (2025). DOI: 10.1073/pnas.2425759122

Researchers have captured the first clear view of the hidden architecture that helps shape a simple multicellular organism, showing how cells work together to build complex life forms.

Volvox. The isolated magenta circles are individual somatic cells, surrounded by green compartments, while the larger magenta circles are daughter spheroids

Creative Commons License.
The text in this work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. Images, including our videos, are Copyright ©University of Cambridge and licensors/contributors as identified. All rights reserved. We make our image and video content available in a number of ways – on our main website under its Terms and conditions, and on a range of channels including social media that permit your use and sharing of our content under their respective Terms.


Brain implants that don’t leave scars

Health

Brain implants that don’t leave scars

Axoft's flexible neural probe is displayed on a fingertip for perspective.

Axoft’s flexible brain implant. 

Axoft Inc.

Kirsten Mabry

Harvard Office of Technology Development

6 min read

Harvard startup is developing a softer device to monitor head injuries

Traumatic brain injuries vary in severity from mild to life-threatening, but neurologists have limited tools to assess the damage. While examinations and external imaging can help, neural probes — devices that create brain-computer interfaces — are even better. The problem? They are made of rigid materials that scar the brain.

Axoft, a startup launched out of Harvard in 2021, is developing a softer alternative, one the company’s researchers say could be inserted into the brain without disturbing its gel-like consistency but is durable enough to deliver accurate neural data.

“With a brain-computer interface, we can determine very precisely what’s happening in the brains of the patients — if they are conscious, if they are not conscious, if they are vegetative, if they are recovering, or if their state is degrading,” said Paul Le Floch, co-founder and CEO of Axoft, who received his Ph.D. in materials science from Harvard.

Clinicians have used neural probes for decades. When inserted into the brain, they measure electrical activity with much more accuracy than external neural imaging. But traditionally, neural probes have been made of rigid materials, which damage the surrounding, highly flexible brain tissue — like razor blades in gel, said Le Floch. Damage to the brain makes neural probes less effective, because the brain responds by surrounding them with scar tissue. Encapsulated in that more rigid tissue, the probes cannot communicate as readily with the neurons around them. Plus, rigid devices can only stay implanted for a short time before they significantly scar the brain. As more sensors are added to a neural probe — an essential element of gathering as much brain activity data as possible — the probes become even more rigid.

Traditionally, neural probes have been made of rigid materials, which damage the surrounding, highly flexible brain tissue — like razor blades in gel.

Le Floch and his collaborators understood that they needed a softer alternative to existing neural probes. “The problem is: Soft materials are not very high-performance,” he said.

During a Ph.D. focused on material science and polymers at Harvard, Le Floch began working as a graduate student in the lab of Jia Liu, an assistant professor of bioengineering at the John A. Paulson School of Engineering and Applied Sciences. Le Floch and Liu focused on an intractable problem: engineering neural probes that worked better for the brain.

Le Floch and Liu collaborated with Tianyang Ye, Ph.D. ’20, a graduate student and then postdoctoral scholar at Harvard specializing in nanoelectronics, as well as a fellow at the Office of Technology Development (OTD), where he worked on commercialization strategies for academic innovations. Ye is now Axoft’s chief technology officer, as well as a co-founder. While Le Floch engineered a higher-performance, soft material that could be inserted into the brain without harming it, Ye designed the electronics that could transmit the data for analysis.

The resulting neural probe is “very biocompatible, because it’s so small, but also very soft,” said Le Floch. “It creates less damage within tissues over time.”

Paul Le Floch (left) and Jia Liu.

Axoft’s novel material, Fleuron, is thousands to millions of times softer and more flexible than the material used in modern neural probes. At the same time, Fleuron is a photoresist, applicable to the chip-fabrication process. As a result, the probe can easily fit more than 1,000 sensors, delivering precise brain-signal data to clinicians.

“In the last few decades, we’ve gone from measuring one neuron, to 10 neurons, to hundreds of neurons — now we’re getting into thousands,” said Le Floch. Those greater multitudes allow researchers to “learn more about the brain and develop new diagnoses and therapies.”

Axoft is working to double the number of electrodes its probe can host every year as it continues to develop the technology. “This will significantly increase the number of neurons Axoft’s probes can measure and stimulate,” said Liu, who helped co-found the company and joined as a scientific adviser.

Brain implants are not necessary for every patient with neurological damage, Le Floch says, but the company has already experienced significant interest from neurologists who have struggled to measure the brain activity of unresponsive patients with acute and traumatic brain injuries.

“We see a big need from a patient perspective.”

Paul Le Floch, Axoft CEO

The impact of the startup’s work was clear from the beginning, according to Christopher Petty, OTD director of business development in physical sciences. “From our point of view, we’re always talking about this mission of taking academically generated knowledge and making a difference in the world with it. This is that in spades,” he said. “That’s the point of everything we’re doing.”

OTD safeguarded the intellectual property of the core discoveries, connected Axoft’s team with potential investors and structured the startup’s license to further develop the technology, while helping its founding researchers think about real-world applications and the journey from testing to commercialization.

Helping a medical device startup flourish, says Petty, differs from the process for a software startup. The clinical trials necessary for approval can take a significant amount of time and cost a lot of money, but there’s also a much more clearly defined path to market. “There’s a clear set of milestones,” Petty said.

Since its founding, Axoft has been working to hurdle those milestones. The company has raised more than $18 million in funding thus far. In 2025, it completed its first human trial at the Panama Clinic in Panama, which demonstrated that the implants were safe to insert and remove and didn’t create additional risks for the brain in the process. The team also determined that the probe could differentiate when patients are conscious or unconscious (due to anesthesia), the latter of which mimics a coma-like state. Within a few minutes, the team was able to measure brain states in the way a functional MRI might over several hours.

Now, in order to generate more preclinical data, Axoft is working with clinicians at Massachusetts General Hospital on porcine models of traumatic brain injury. Le Floch expects Axoft will be able to begin another in-human study with the hospital in the next year.

In 2027, Axoft is targeting an FDA-managed clinical trial focused on individuals with traumatic brain injuries, in whom the device can measure recovery and consciousness. If all goes well, the devices could be available to physicians by 2028. Le Floch believes the implants could quickly scale to hundreds of patients.

“We see a big need from a patient perspective, and there is already an ecosystem in hospitals for using neuromonitoring devices,” he said.


This research received federal funding from the National Science Foundation.

In touch with our emotions, finally

Work & Economy

In touch with our emotions, finally

Insights at intersection of gender, anger, and risk are just one example of shift in science of decision making

Sy Boles

Harvard Staff Writer

5 min read
Jennifer Lerner.

Jennifer Lerner is the Thornton F. Bradshaw Professor of Public Policy, Decision Science, and Management at Harvard Kennedy School.

Niles Singer/Harvard Staff Photographer

Tightrope series

A series exploring how risk shapes our decisions.

Letting raw emotion drive financial decisions sounds like a recipe for disaster. But Jennifer Lerner, the Thornton F. Bradshaw Professor of Public Policy, Decision Science, and Management at the Kennedy School, found that anger turned out well, at least for men, in a computerized gambling game.

Lerner and National Institutes of Health scientist Rebecca Ferrer (a former student) co-led a set of experiments using the Balloon Analog Risk Task, in which participants earn more money each time they add air to a virtual balloon, but lose it all if they go too far and burst the balloon.
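
For readers unfamiliar with the task, the toy simulation below captures its basic logic: each pump adds money, but a burst wipes out the trial's earnings. The payout, burst range, and strategies are illustrative numbers only, not the parameters used in the experiments.

```python
# Toy simulation of the Balloon Analog Risk Task logic; all parameters are
# illustrative and do not reflect the published experiments.
import random

def play_balloon(pumps_planned, burst_at, cents_per_pump=5):
    """Pump up to `pumps_planned` times; bank nothing if the balloon bursts."""
    for pump in range(1, pumps_planned + 1):
        if pump >= burst_at:
            return 0                       # balloon burst, trial earnings lost
    return pumps_planned * cents_per_pump  # banked winnings

random.seed(0)
for strategy, pumps in [("cautious", 10), ("bold", 40)]:
    earnings = [play_balloon(pumps, burst_at=random.randint(1, 64))
                for _ in range(10_000)]
    print(strategy, sum(earnings) / len(earnings), "cents per balloon on average")
```

In this toy setup, as in the experiments described here, bolder pumping earns more on average even though individual balloons burst more often.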

When males were primed for anger, they took bigger risks and walked away with fatter wallets than did neutral-emotion males or angry females. Unlike previous studies that demonstrated causal effects of anger on lowering risk perceptions and reducing the likelihood of taking protective actions among males and females, these experiments focused on actual risk-taking behavior — revealing that anger drove bolder bets, primarily among men.

Correlating gender and emotion is always slippery, Lerner noted. After all, you can’t randomly assign adults to the categories of male or female, never mind tease out whether differences are due to biology, socialization, culture, or something else entirely. But the findings raise interesting questions about how gender, emotions, and risk intersect in high-stakes environments like entrepreneurship or the stock market, she said.

“There’s a large and interesting debate about whether males are more risk-taking in general.”

“There’s a large and interesting debate about whether males are more risk-taking in general, which our work only partially addresses,” Lerner said. “It looks only at the role of anger in financial risk-taking and the gender differences there.” 

Wanting the results to be understood in broader context, she stressed two points.

“Whether risk-taking turns out to be good or bad depends entirely on the situation,” Lerner said. “We designed our studies to reward risk-taking, but there are many real-world situations where caution would be a better strategy.

“Also, while males and females may differ on average in how anger influences their financial risk-taking, across most decisions there’s more variation within each gender than between genders. So, knowing someone’s gender will tell you less about their decision-making than will understanding their individual traits or social/cultural context.”

Emotion has been massively understudied as a factor in decision-making, according to Lerner. “Even though,” she said, “if you ask people on the street, ‘What’s important to understand in decision-making,’ they often say ‘emotion.’” 

Her work plays a role in a recent shift — studies examining how emotion affects decision-making have dramatically increased, recognizing that emotion can be adaptive or maladaptive. More generally, emotion now appears prominently in emerging models of brain, mind, and behavior.

Science is catching up to the widespread use of emotion to shape behavior in marketing campaigns.

In other words, science is catching up to the widespread use of emotion to shape behavior in marketing campaigns. In student-led studies, Lerner’s lab has examined emotionally evocative public health campaigns designed to communicate the risks associated with tobacco use. One study, led by Charlie Dorison, Ph.D. ’20, found that inducing certain kinds of sadness can backfire, inadvertently increasing smoking. Another, led by Ke Wang, Ph.D. ’24, found that gratitude can play a powerful role in encouraging smoking cessation. Both studies come from a broader stream of work in Lerner’s lab examining ways in which emotion influences appetitive risk behaviors (e.g., smoking, vaping, gambling).

In Lerner’s own life, being well-informed about risk gave her something very valuable: her daughter. As a child, Lerner was diagnosed with lupus. Among other effects of the autoimmune disease, she was told she should never have biological children: The risk of miscarriage was high, and her health could be in serious jeopardy. She considered adoption, but soon learned that having lupus would significantly lower the odds that she would ever be selected as an adoptive parent.

Lerner could have let generalized fear guide her decision about having a child. Instead, she and her husband dug into the medical research. 

Their plan was to examine feelings shaped by doctors’ warnings and common beliefs before taking a hard look at the actual risks. Was their fear incidental or integral? How much uncertainty could they comfortably tolerate?

“We just analyzed everything, all the scientific studies we could find,” Lerner said. “And we decided, given where my health was at the time and the medications I was on, we could accept the risks.” Two months ago, that baby — now grown up — graduated from college.

Professionally, Lerner studies risk across a variety of domains, including health, economics, national security, and (most recently) climate change. She serves on several councils and boards, including the board of the Forecasting Research Institute, a nonprofit attempting to develop methodologies for quantifying the risk of existential threats (e.g., AI takeover). She believes decision-making skills are essential life skills for everyone, and should be taught at a young age. For that reason, she also volunteers as an ambassador for the Alliance for Decision Education, a nonprofit providing free access to decision-making curricula for K-12 schools. Much of her own time is spent with professionals from around the world who work in decision-intensive roles — “from financial analysts to firefighters,” as she puts it — in an executive education course she teaches.

“In today’s world, the ability to leverage information effectively is a crucial skill,” she said. “That means being clear on how to estimate uncertainty, how to judge your confidence in those estimates, and how to recognize the myriad — helpful or unhelpful — ways emotion may shape judgments. These aren’t just tools for leaders or analysts — they’re for all of us.”


A new way to test how well AI systems classify text

Is this movie review a rave or a pan? Is this news story about business or technology? Is this online chatbot conversation veering off into giving financial advice? Is this online medical information site giving out misinformation?

These kinds of automated conversations, whether they involve seeking a movie or restaurant review or getting information about your bank account or health records, are becoming increasingly prevalent. More than ever, such evaluations are being made by highly sophisticated algorithms, known as text classifiers, rather than by human beings. But how can we tell how accurate these classifications really are?

Now, a team at MIT’s Laboratory for Information and Decision Systems (LIDS) has come up with an innovative approach to not only measure how well these classifiers are doing their job, but then go one step further and show how to make them more accurate.

The new evaluation and remediation software was developed by Lei Xu, working with Sarah Alnegheimish and Kalyan Veeramachaneni, a principal research scientist at LIDS and the paper’s senior author, along with two others. The software package is being made freely available for download by anyone who wants to use it.

A standard method for testing these classification systems is to create what are known as synthetic examples — sentences that closely resemble ones that have already been classified. For example, researchers might take a sentence that has already been tagged by a classifier program as being a rave review, and see if changing a word or a few words while retaining the same meaning could fool the classifier into deeming it a pan. Or a sentence that was determined to be misinformation might get misclassified as accurate. Sentences that can fool the classifiers in this way are known as adversarial examples.

People have tried various ways to find the vulnerabilities in these classifiers, Veeramachaneni says. But existing methods of finding these vulnerabilities have a hard time with this task and miss many examples that they should catch, he says.

Increasingly, companies are trying to use such evaluation tools in real time, monitoring the output of chatbots used for various purposes to try to make sure they are not putting out improper responses. For example, a bank might use a chatbot to respond to routine customer queries such as checking account balances or applying for a credit card, but it wants to ensure that its responses could never be interpreted as financial advice, which could expose the company to liability. “Before showing the chatbot’s response to the end user, they want to use the text classifier to detect whether it’s giving financial advice or not,” Veeramachaneni says. But then it’s important to test that classifier to see how reliable its evaluations are.

“These chatbots, or summarization engines or whatnot are being set up across the board,” he says, to deal with external customers and within an organization as well, for example providing information about HR issues. It’s important to put these text classifiers into the loop to detect things that they are not supposed to say, and filter those out before the output gets transmitted to the user.

That’s where the use of adversarial examples comes in — those sentences that have already been classified but then produce a different response when they are slightly modified while retaining the same meaning. How can people confirm that the meaning is the same? By using another large language model (LLM) that interprets and compares meanings. So, if the LLM says the two sentences mean the same thing, but the classifier labels them differently, “that is a sentence that is adversarial — it can fool the classifier,” Veeramachaneni says. And when the researchers examined these adversarial sentences, “we found that most of the time, this was just a one-word change,” although the people using LLMs to generate these alternate sentences often didn’t realize that.
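
In pseudocode terms, the check is simple: a rewrite counts as adversarial when a meaning-preserving judge accepts it but the classifier changes its label. The sketch below uses toy stand-ins for both models; it illustrates the definition, not the team's SP-Attack implementation.

```python
# Minimal sketch of the adversarial-example check described above.
# `toy_classifier` and `same_meaning` are stand-ins for real models.
def toy_classifier(sentence: str) -> str:
    """Stand-in for the text classifier under test."""
    return "financial_advice" if "invest" in sentence.lower() else "other"

def same_meaning(a: str, b: str) -> bool:
    """Stand-in for an LLM judge that decides whether two sentences agree."""
    return True  # assume the paraphrase preserved the meaning

def is_adversarial(original: str, rewrite: str) -> bool:
    return same_meaning(original, rewrite) and toy_classifier(original) != toy_classifier(rewrite)

original = "You should invest your bonus in index funds."
rewrite = "You should put your bonus in index funds."  # one-word change
print(is_adversarial(original, rewrite))               # True for this toy classifier
```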

Further investigation, using LLMs to analyze many thousands of examples, showed that certain specific words had an outsized influence in changing the classifications, and therefore the testing of a classifier’s accuracy could focus on this small subset of words that seem to make the most difference. They found that one-tenth of 1 percent of all the 30,000 words in the system’s vocabulary could account for almost half of all these reversals of classification, in some specific applications.

Lei Xu PhD ’23, a recent graduate from LIDS who performed much of the analysis as part of his thesis work, “used a lot of interesting estimation techniques to figure out what are the most powerful words that can change the overall classification, that can fool the classifier,” Veeramachaneni says. The goal is to make it possible to do much more narrowly targeted searches, rather than combing through all possible word substitutions, thus making the computational task of generating adversarial examples much more manageable. “He’s using large language models, interestingly enough, as a way to understand the power of a single word.”

Then, also using LLMs, he searches for other words that are closely related to these powerful words, and so on, allowing for an overall ranking of words according to their influence on the outcomes. Once these adversarial sentences have been found, they can be used in turn to retrain the classifier to take them into account, increasing the robustness of the classifier against those mistakes.
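
The follow-on steps can be sketched the same way: score candidate words by how often substituting them flips a label, then feed the flipped sentences back into training. Again, the toy classifier and word list below are stand-ins for illustration, not SP-Defense itself.

```python
# Sketch of ranking words by their influence on a classifier's labels.
# The toy classifier and tiny corpus are placeholders for real models and data.
def toy_classifier(sentence: str) -> str:
    return "financial_advice" if "invest" in sentence.lower() else "other"

def flip_rate(word, replacement, sentences, classify):
    """Fraction of sentences containing `word` whose label flips when
    `word` is replaced by `replacement`."""
    hits = [s for s in sentences if word in s]
    if not hits:
        return 0.0
    flips = sum(classify(s) != classify(s.replace(word, replacement)) for s in hits)
    return flips / len(hits)

corpus = ["You should invest your bonus.", "Please invest in safety training."]
scores = {w: flip_rate(w, "put", corpus, toy_classifier) for w in ["invest", "bonus"]}
print(sorted(scores.items(), key=lambda kv: kv[1], reverse=True))  # most influential first

# Hardening step (conceptual): append the flipped sentences, paired with their
# original correct labels, to the training data and refit the classifier.
```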

Making classifiers more accurate may not sound like a big deal if it’s just a matter of classifying news articles into categories, or deciding whether reviews of anything from movies to restaurants are positive or negative. But increasingly, classifiers are being used in settings where the outcomes really do matter, whether preventing the inadvertent release of sensitive medical, financial, or security information, or helping to guide important research, such as into properties of chemical compounds or the folding of proteins for biomedical applications, or in identifying and blocking hate speech or known misinformation.

As a result of this research, the team introduced a new metric, which they call p, which provides a measure of how robust a given classifier is against single-word attacks. And because of the importance of such misclassifications, the research team has made its products available as open access for anyone to use. The package consists of two components: SP-Attack, which generates adversarial sentences to test classifiers in any particular application, and SP-Defense, which aims to improve the robustness of the classifier by generating and using adversarial sentences to retrain the model.

In some tests, where competing methods of testing classifier outputs allowed a 66 percent success rate by adversarial attacks, this team’s system cut that attack success rate almost in half, to 33.7 percent. In other applications, the improvement was as little as a 2 percent difference, but even that can be quite important, Veeramachaneni says, since these systems are being used for so many billions of interactions that even a small percentage can affect millions of transactions.

The team’s results were published on July 7 in the journal Expert Systems in a paper by Xu, Veeramachaneni, and Alnegheimish of LIDS, along with Laure Berti-Equille at IRD in Marseille, France, and Alfredo Cuesta-Infante at the Universidad Rey Juan Carlos, in Spain. 

© Image: iStock

A new approach measures how well text classifiers are doing their job, and shows how to make them more accurate.

MIT gears up to transform manufacturing

“Manufacturing is the engine of society, and it is the backbone of robust, resilient economies,” says John Hart, head of MIT’s Department of Mechanical Engineering (MechE) and faculty co-director of the MIT Initiative for New Manufacturing (INM). “With manufacturing a lively topic in today’s news, there’s a renewed appreciation and understanding of the importance of manufacturing to innovation, to economic and national security, and to daily lives.”

Launched this May, INM will “help create a transformation of manufacturing through new technology, through development of talent, and through an understanding of how to scale manufacturing in a way that imparts higher productivity and resilience, drives adoption of new technologies, and creates good jobs,” Hart says.

INM is one of MIT’s strategic initiatives and builds on the successful three-year-old Manufacturing@MIT program. “It’s a recognition by MIT that manufacturing is an Institute-wide theme and an Institute-wide priority, and that manufacturing connects faculty and students across campus,” says Hart. Alongside Hart, INM’s faculty co-directors are Institute Professor Suzanne Berger and Chris Love, professor of chemical engineering.

The initiative is pursuing four main themes: reimagining manufacturing technologies and systems, elevating the productivity and human experience of manufacturing, scaling up new manufacturing, and transforming the manufacturing base.

Breaking manufacturing barriers for corporations

Amgen, Autodesk, Flex, GE Vernova, PTC, Sanofi, and Siemens are founding members of INM’s industry consortium. These industry partners will work closely with MIT faculty, researchers, and students across many aspects of manufacturing-related research, both in broad-scale initiatives and in particular areas of shared interests. Membership requires a minimum three-year commitment of $500,000 a year to manufacturing-related activities at MIT, including the INM membership fee of $275,000 per year, which supports several core activities that engage the industry members.

One major thrust for INM industry collaboration is the deployment and adoption of AI and automation in manufacturing. This effort will include seed research projects at MIT, collaborative case studies, and shared strategy development.

INM also offers companies participation in the MIT-wide New Manufacturing Research effort, which is studying the trajectories of specific manufacturing industries and examining cross-cutting themes such as technology and financing.

Additionally, INM will concentrate on education for all professions in manufacturing, with alliances bringing together corporations, community colleges, government agencies, and other partners. “We'll scale our curriculum to broader audiences, from aspiring manufacturing workers and aspiring production line supervisors all the way up to engineers and executives,” says Hart.

In workforce training, INM will collaborate with companies broadly to help understand the challenges and frame its overall workforce agenda, and with individual firms on specific challenges, such as acquiring suitably prepared employees for a new factory.

Importantly, industry partners will also engage directly with students. Founding member Flex, for instance, hosted MIT researchers and students at the Flex Institute of Technology in Sorocaba, Brazil, developing new solutions for electronics manufacturing.

“History shows that you need to innovate in manufacturing alongside the innovation in products,” Hart comments. “At MIT, as more students take classes in manufacturing, they’ll think more about key manufacturing issues as they decide what research problems they want to solve, or what choices they make as they prototype their devices. The same is true for industry — companies that operate at the frontier of manufacturing, whether through internal capabilities or their supply chains, are positioned to be on the frontier of product innovation and overall growth.”

“We’ll have an opportunity to bring manufacturing upstream to the early stage of research, designing new processes and new devices with scalability in mind,” he says.

Additionally, MIT expects to open new manufacturing-related labs and to further broaden cooperation with industry at existing shared facilities, such as MIT.nano. Hart says that facilities will also invite tighter collaborations with corporations — not just providing advanced equipment, but working jointly on, say, new technologies for weaving textiles, or speeding up battery manufacturing.

Homing in on the United States

INM is a global project that brings a particular focus on the United States, which remains the world’s second-largest manufacturing economy, but has suffered a significant decline in manufacturing employment and innovation.

One key to reversing this trend and reinvigorating the U.S. manufacturing base is advocacy for manufacturing’s critical role in society and the career opportunities it offers.

“No one really disputes the importance of manufacturing,” Hart says. “But we need to elevate interest in manufacturing as a rewarding career, from the production workers to manufacturing engineers and leaders, through advocacy, education programs, and buy-in from industry, government, and academia.”

MIT is in a unique position to convene industry, academic, and government stakeholders in manufacturing to work together on this vital issue, he points out.

Moreover, in times of radical and rapid changes in manufacturing, “we need to focus on deploying new technologies into factories and supply chains,” Hart says. “Technology is not all of the solution, but for the U.S. to expand our manufacturing base, we need to do it with technology as a key enabler, embracing companies of all sizes, including small and medium enterprises.”

“As AI becomes more capable, and automation becomes more flexible and more available, these are key building blocks upon which you can address manufacturing challenges,” he says. “AI and automation offer new accelerated ways to develop, deploy, and monitor production processes, which present a huge opportunity and, in some cases, a necessity.”

“While manufacturing is always a combination of old technology, new technology, established practice, and new ways of thinking, digital technology gives manufacturers an opportunity to leapfrog competitors,” Hart says. “That’s very, very powerful for the U.S. and any company, or country, that aims to create differentiated capabilities.”

Fortunately, in recent years, investors have increasingly bought into new manufacturing in the United States. “They see the opportunity to re-industrialize, to build the factories and production systems of the future,” Hart says.

“That said, building new manufacturing is capital-intensive, and takes time,” he adds. “So that’s another area where it’s important to convene stakeholders and to think about how startups and growth-stage companies build their capital portfolios, how large industry can support an ecosystem of small businesses and young companies, and how to develop talent to support those growing companies.”

All these concerns and opportunities in the manufacturing ecosystem play to MIT’s strengths. “MIT’s DNA of cross-disciplinary collaboration and working with industry can let us create a lot of impact,” Hart emphasizes. “We can understand the practical challenges. We can also explore breakthrough ideas in research and cultivate successful outcomes, all the way to new companies and partnerships. Sometimes those are seen as disparate approaches, but we like to bring them together.”

© Photo: David Sella

John Hart is head of the Department of Mechanical Engineering and faculty co-director of the Initiative for New Manufacturing.

The art and science of being an MIT teaching assistant

“It’s probably the hardest thing I’ve ever done at MIT,” says Haley Nakamura, a second-year MEng student in the MIT Department of Electrical Engineering and Computer Science (EECS). She’s not reflecting on a class, final exam, or research paper. Nakamura is talking about the experience of being a teaching assistant (TA). “It’s really an art form, in that there is no formula for being a good teacher. It’s a skill, and something you have to continuously work at and adapt to different people.”

Nakamura, like approximately 16 percent of her EECS MEng peers, balances her own coursework with teaching responsibilities. The TA role is complex, nuanced, and at MIT, can involve much more planning and logistics than you might imagine. Nakamura works on a central computer science (CS) course, 6.3900 (Introduction to Machine Learning), which registers around 400-500 students per semester. For that enrollment, the course requires eight instructors at the lecturer/professor level; 15 TAs, between the undergraduate and graduate level; and about 50 lab assistants (LAs). Students are split across eight sections corresponding to each senior instructor, with a group of TAs and LAs for each section of 60-70 students.

To keep everyone moving forward at the same pace, coordination and organization are key. “A lot of the reason I got my initial TA-ship was because I was pretty organized,” Nakamura explains. “Everyone here at MIT can be so busy that it can be difficult to be on top of things, and students will be the first to point out logistical confusion and inconsistencies. If they’re worried about some quirk on the website, or wondering how their grades are being calculated, those things can prevent them from focusing on content.” 

Nakamura's organizational skills made her a good candidate to spot and deal with potential wrinkles before they derailed a course section. “When I joined the course, we wanted someone on the TA side to be more specifically responsible for underlying administrative tasks, so I became the first head TA for the course. Since then, we’ve built that role up more and more. There is now a head TA, a head undergraduate TA, and section leads working on internal documentation such as instructions for how to improve content and how to manage office hours.” The result of this administrative work is consistency across sections and semesters.

The other side of a TA-ship is, of course, teaching. “I was eager to engage with students in a meaningful way,” says Soroush Araei, a sixth-year graduate student who had already fulfilled the teaching requirement for his degree in electrical engineering, but who jumped at the chance to teach alongside his PhD advisor. “I enjoy teaching, and have always found that explaining concepts to others deepens my own understanding.” He was recently awarded the ​MIT School of Engineering’s 2025 Graduate Student Teaching and Mentoring Award, which honors “a graduate student in the School of Engineering who has demonstrated extraordinary teaching and mentoring as a teaching or research assistant.” Araei’s dedication comes at the price of sleep. “Juggling my own research with my TA duties was no small feat. I often found myself in the lab for long hours, helping students troubleshoot their circuits. While their design simulations looked perfect, the circuits they implemented on protoboards didn’t always perform as expected. I had to dive deep into the issues alongside the students, which often required considerable time and effort.”

The rewards for Araei’s work are often intrinsic. “Teaching has shown me that there are always deeper layers to understanding. There are concepts I thought I had mastered, but I realized gaps in my own knowledge when trying to explain them,” he says. Another challenge: the variety of background knowledge between students in a single class. “Some had never encountered transistors, while others had tape-out experience. Designing problem sets and selecting questions for office hours required careful planning to keep all students engaged.” For Araei, some of the best moments have come during office hours. “Witnessing the ‘aha’ moment on a student’s face when a complex concept finally clicked was incredibly rewarding.”

The pursuit of the “aha” moment is a common thread between TAs. “I still struggle with the feeling that you’re responsible for someone’s understanding in a given topic, and, if you’re not doing a good job, that could affect that person for the rest of their life,” says Nakamura. “But the flip side of that moment of confusion is when someone has the ‘aha!’ moment as you’re talking to them, when you’re able to explain something that wasn’t conveyed in the other materials. It was your help that broke through and gave understanding. And that reward really overruns the fear of causing confusion.”

Hope Dargan ’21, MEng ’23, a second-year PhD student in EECS, uses her role as a graduate instructor to try to reach students who may not fit into the stereotype of the scientist. She started her career at MIT planning to major in CS and become a software engineer, but a missionary trip to Sweden in 2016-17 (when refugees from the Syrian civil war were resettling in the region) sparked a broader interest in both the Middle East and in how groups of people contextualized their own narratives. When Dargan returned to MIT, she took on a history degree, writing her thesis on the experiences of queer Mormon women. Additionally, she taught for MEET (the Middle East Entrepreneurs of Tomorrow), an educational initiative for Israeli and Palestinian high school students. “I realized I loved teaching, and this experience set me on a trajectory to teaching as a career.” 

Dargan gained her teaching license as an undergrad through the MIT Scheller Teacher Education Program (STEP), then joined the MEng program, in which she designed an educational intervention for students who were struggling in class 6.101 (Fundamentals of Programming). The next step was a PhD. “Teaching is so context-dependent,” says Dargan, who was awarded the Goodwin Medal for her teaching efforts in 2023. “When I taught students for MEET, it was very different from when I was teaching eighth graders at Josiah Quincy Upper School for my teaching license, and very different now when I teach students in 6.101, versus when I teach the LGO [Leaders for Global Operations] students Python in the summers. Each student has their own unique perspective on what’s motivating them, how they learn, and what they connect to … So even if I’ve taught the material for five years (as I have for 6.101, because I was an LA, then a TA, and now an instructor), improving my teaching is always challenging. Getting better at adapting my teaching to the context of the students and their stories, which are ever-evolving, is always interesting.”

Although Dargan considers teaching one of her greatest passions, she is clear-eyed about the cost of the profession. “I think the things that we’re passionate about tell us a lot about ourselves, both our strengths and our weaknesses, and teaching has taught me a lot about my weaknesses,” she says. “Teaching is a tough career, because it tends to take people who care a lot and are perfectionists, and it can lead to a lot of burnout.”

Dargan's students have also expressed enthusiasm and gratitude for her work. “Hope is objectively the most helpful instructor I’ve ever had,” said one anonymous reviewer. Another wrote, “I never felt judged when I asked her questions, and she was great at guiding me through problems by asking motivating questions … I truly felt like she cared about me as a student and person.” Dargan herself is modest about her role, saying, “For me, the trade-off between teaching and research is that teaching has an immediate day-to-day impact, while research has this unknown potential for long-term impact.” 

With the responsibility to instruct an ever-growing percentage of the Institute’s students, the Department of Electrical Engineering and Computer Science relies heavily on dedicated and passionate students like Nakamura, Araei, and Dargan. As their caring and humane influence ripples outward through thousands of new electrical engineers and computer scientists, the day-to-day impact of their work is clear; but the long-term impact may be greater than any of them know.

© Photo: Frankie Schulte

Haley Nakamura, a computer science and engineering major who’s minoring in environmental engineering, is the first-ever head TA for class 6.3900 (Introduction to Machine Learning), which registers 400-500 students per semester. “The organizational structure of the course is almost like a company,” she says. “We’ve made many changes to the management structure, and I hope they continue to pay off.” This year, Nakamura was honored with the MIT Goodwin Medal, which is given annually to a graduate student whose performance of teaching duties is “conspicuously effective over and above ordinary excellence.”

Would you like that coffee with iron?

Around the world, about 2 billion people suffer from iron deficiency, which can lead to anemia, impaired brain development in children, and increased infant mortality.

To combat that problem, MIT researchers have come up with a new way to fortify foods and beverages with iron, using small crystalline particles. These particles, known as metal-organic frameworks, could be sprinkled on food, added to staple foods such as bread, or incorporated into drinks like coffee and tea.

“We’re creating a solution that can be seamlessly added to staple foods across different regions,” says Ana Jaklenec, a principal investigator at MIT’s Koch Institute for Integrative Cancer Research. “What’s considered a staple in Senegal isn’t the same as in India or the U.S., so our goal was to develop something that doesn’t react with the food itself. That way, we don’t have to reformulate for every context — it can be incorporated into a wide range of foods and beverages without compromise.”

The particles designed in this study can also carry iodine, another critical nutrient. The particles could also be adapted to carry important minerals such as zinc, calcium, or magnesium.

“We are very excited about this new approach and what we believe is a novel application of metal-organic frameworks to potentially advance nutrition, particularly in the developing world,” says Robert Langer, the David H. Koch Institute Professor at MIT and a member of the Koch Institute.

Jaklenec and Langer are the senior authors of the study, which appears today in the journal Matter. MIT postdoc Xin Yang and Linzixuan (Rhoda) Zhang PhD ’24 are the lead authors of the paper.

Iron stabilization

Food fortification can be a successful way to combat nutrient deficiencies, but this approach is often challenging because many nutrients are fragile and break down during storage or cooking. When iron is added to foods, it can react with other molecules in the food, giving the food a metallic taste.

In previous work, Jaklenec’s lab has shown that encapsulating nutrients in polymers can protect them from breaking down or reacting with other molecules. In a small clinical trial, the researchers found that women who ate bread fortified with encapsulated iron were able to absorb the iron from the food.

However, one drawback to this approach is that the polymer adds a lot of bulk to the material, limiting the amount of iron or other nutrients that end up in the food.

“Encapsulating iron in polymers significantly improves its stability and reactivity, making it easier to add to food,” Jaklenec says. “But to be effective, it requires a substantial amount of polymer. That limits how much iron you can deliver in a typical serving, making it difficult to meet daily nutritional targets through fortified foods alone.”

To overcome that challenge, Yang came up with a new idea: Instead of encapsulating iron in a polymer, they could use iron itself as a building block for a crystalline particle known as a metal-organic framework, or MOF (pronounced “moff”).

MOFs consist of metal atoms joined by organic molecules called ligands to create a rigid, cage-like structure. Depending on the combination of metals and ligands chosen, they can be used for a wide variety of applications.

“We thought maybe we could synthesize a metal-organic framework with food-grade ligands and food-grade micronutrients,” Yang says. “Metal-organic frameworks have very high porosity, so they can load a lot of cargo. That’s why we thought we could leverage this platform to make a new metal-organic framework that could be used in the food industry.”

In this case, the researchers designed a MOF consisting of iron bound to a ligand called fumaric acid, which is often used as a food additive to enhance flavor or help preserve food.

This structure prevents iron from reacting with polyphenols — compounds commonly found in foods such as whole grains and nuts, as well as coffee and tea. When iron does react with those compounds, it forms a metal polyphenol complex that cannot be absorbed by the body.

The MOFs’ structure also allows them to remain stable until they reach an acidic environment, such as the stomach, where they break down and release their iron payload.

Double-fortified salts

The researchers also decided to include iodine in their MOF particle, which they call NuMOF. Iodized salt has been very successful at preventing iodine deficiency, and many efforts are now underway to create “double-fortified salts” that would also contain iron.

Delivering these nutrients together has proven difficult because iron and iodine can react with each other, making each one less likely to be absorbed by the body. In this study, the MIT team showed that once they formed their iron-containing MOF particles, they could load them with iodine in a way that keeps the iron and iodine from reacting with each other.

In tests of the particles’ stability, the researchers found that the NuMOFs could withstand long-term storage, high heat and humidity, and boiling water.

Throughout these tests, the particles maintained their structure. When the researchers then fed the particles to mice, they found that both iron and iodine became available in the bloodstream within several hours of the NuMOF consumption.

The researchers are now working on launching a company that is developing coffee and other beverages fortified with iron and iodine. They also hope to continue working toward a double-fortified salt that could be consumed on its own or incorporated into staple food products.

The research was partially supported by J-WAFS Fellowships for Water and Food Solutions. 

Other authors of the paper include Fangzheng Chen, Wenhao Gao, Zhiling Zheng, Tian Wang, Erika Yan Wang, Behnaz Eshaghi, and Sydney MacDonald.

This research was conducted, in part, using MIT.nano’s facilities.

© Credit: Christine Daniloff, MIT; iStock

MIT researchers have come up with a new way to fortify foods and beverages with iron or iodine, using small crystalline particles. The particles could be sprinkled on food like salt, added to staple foods such as bread, or incorporated into drinks, including coffee and tea.

Falling ice drives glacial retreat in Greenland

The Greenland ice sheet is melting at an increasing rate, a process accelerated by glacier calving, in which huge chunks of ice break free and crash into the sea, generating large waves that push warmer water to the surface. A new study now shows that this mechanism is amplifying glacial melt.

Researchers uncover surprising limit on human imagination

Tomer D. Ullman.

Niles Singer/Harvard Staff Photographer

Science & Tech

Researchers uncover surprising limit on human imagination

Humans can track a handful of objects visually, but their imaginations can only handle one

Christy DeSmith

Harvard Staff Writer

4 min read

Human beings can juggle up to 10 balls at once. But how many can they move through the air with their imaginations?

The answer, published last month in Nature Communications, astonished even the researchers pursuing the question. The cognitive psychologists found people could easily imagine the trajectory of a single ball after it disappeared. But the imagination couldn’t simultaneously keep tabs on two moving balls that fell from view.

“We set out to test the capacity limits of the imagination, and we found that it was one,” said co-author Tomer D. Ullman, associate professor in the Department of Psychology. “I found this surprising, so I can understand if others do, too.”

Ullman, who heads Harvard’s Computation, Cognition, and Development lab, has a long-time interest in what is known as intuitive physics. Think of the brain conjuring a ball as it rolls downhill, or sounding the alarm over two objects on a sure-fire collision course.

“How do we interact with the physical world around us?” wondered Ullman, who is also affiliated with the Kempner Institute for the Study of Natural and Artificial Intelligence. “I subscribe to the theory that the brain may be running mental simulations, kind of like a video game.”

These couldn’t be perfect simulations of physical environments, right down to the level of atoms and molecules. So Ullman’s lab has worked to understand what kinds of hacks and workarounds make mental simulations possible.

“The human imagination is just really cool, and we find a lot of people are quite interested in how it works,” he offered.

A sizable body of research has explored the capacity limits of human perception, or how many objects the brain can track in a visual scene. “Maybe you’re a parent watching multiple kids, or maybe you’re a lifeguard on duty,” Ullman said. “Obviously you can’t keep track of everything.”

Neuroscientists, psychologists, and computational modelers have found visual tracking is limited to just a handful of moving objects. But few have explored the imagination’s capacity limits.

In the new study, online participants were shown an animation of a bouncing ball, as if on a racquetball court, before it vanished. Others saw two balls ricocheting at completely different cadences before both disappeared. Designing the experiments with Ullman was lead author Halely Balaban, an assistant professor of cognitive psychology at the Open University of Israel.

Also devised were two computational models to explain how the imagination might follow these invisible balls to their moment of impact. The first model posited that multiple objects would be moved in parallel, while the second envisioned independently moving each ball in more of a serial fashion.
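
As a rough illustration of that distinction (a toy example of the two strategies, not the authors' published models), the sketch below predicts when imagined falling balls hit the ground: the parallel variant advances every ball at once, while the serial variant only turns to a later ball after finishing the earlier one, so its later predictions lag.

```python
# Toy contrast between a "parallel" and a "serial" mental-simulation model
# for predicting when hidden balls hit the ground. Illustrative only; the
# heights, velocities, and the serial-delay assumption are invented here.

import math

G = 9.81  # gravitational acceleration, m/s^2

def time_to_impact(height, downward_velocity):
    """Time for a ball at `height` (m), moving down at `downward_velocity`
    (m/s), to reach the ground under gravity."""
    # Solve 0 = height - v*t - 0.5*G*t^2 for the positive root t.
    return (-downward_velocity + math.sqrt(downward_velocity**2 + 2 * G * height)) / G

def parallel_model(balls):
    """Advance all balls at once: each prediction is just that ball's physics."""
    return [time_to_impact(h, v) for h, v in balls]

def serial_model(balls):
    """Simulate one ball at a time: a later ball is only imagined after the
    earlier one finishes, so its predicted impact arrives late."""
    predictions, time_spent = [], 0.0
    for h, v in balls:
        t = time_to_impact(h, v)
        predictions.append(time_spent + t)
        time_spent += t
    return predictions

balls = [(2.0, 0.0), (1.5, 0.5)]  # (height m, downward velocity m/s) when each ball vanished
print(parallel_model(balls))  # both tracked simultaneously
print(serial_model(balls))    # the second ball's impact is predicted too late
```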

Ullman and Balaban found their online recruits were pretty good at predicting when a single invisible ball would have hit the ground. But people fumbled at tracking two.

“It was harder than any of us expected,” said Ullman, noting how reliably the exercise produced laughs.

Based on past findings, the co-authors originally thought the imagination could probably track about three or four objects. There were also intuitive reasons to think the mind’s eye could move multiple objects in parallel.

“If I close my eyes right now, I can see a tower of blocks falling down,” Ullman noted. “It doesn’t feel limited. People feel like they should be able to move more than one.”

In fact, a follow-up experiment found people were slightly better at tracking two balls that moved in tandem before disappearing. But performance still paled next to yet another follow-up, in which study participants tracked two balls that remained visible until impact.

When it comes to tracking objects that have disappeared, the researchers found, the human imagination relies largely on a serial model, moving each piece one after the other.

A separate follow-up tested whether people might be conserving mental energy by employing a serial model. After all, running a simulation via the parallel model would require more effort. Imagine a computer running multiple simulations at once.

“We offered participants a bunch of money if they could get this right,” Ullman explained. “But that didn’t seem to matter.”

For Ullman, the findings open an exciting frontier. “There has been decades and decades of work on how the mind uses clever tricks to keep track of what’s in front of you,” he said. “But there’s been so little on the tricks and limitations of the mind’s eye. I could imagine a lot more work to do here.”

Revitalising youths and textiles: The story of Repurposed

Matthew Tan, a student at the NUS Faculty of Arts and Social Sciences (FASS), and FASS alumna Desiree Chang launched Woofie’s Warehouse in 2021 based on a hybrid concept that blends thrift and vintage by selling second-hand clothing primarily sourced from Japan.

Purchasing the merchandise in bulk often meant that about 5 to 10 per cent of the clothing was unfit for sale due to being in poor condition or heavily damaged. Faced with unsold inventory and Singapore’s dismal textile recycling rate of just 2 per cent in 2023, the environmentally-conscious pair decided to act. Their solution? Repurposed – a bold initiative that transforms unsellable garments into stylish upcycled pieces, while empowering youths to explore their passion for design and fashion.

Repurposed – An idea borne at the FASS Social Entrepreneurship Incubator Pitch Competition

Repurposed, an initiative that was launched in April, offers young people passionate about sustainable fashion and entrepreneurship hands-on training in sewing, embroidery, and textile reworking. Participants have the opportunity to upcycle Woofie’s unsold merchandise into their own clothing lines, which are then sold in stores. Alongside practical skills, they receive mentorship on the ins and outs of running a thrift business, equipping them with the knowledge to turn their creativity into entrepreneurial ventures.

The idea emerged after Matthew and Desiree banded together with like-minded entrepreneurial friends, NUS Faculty of Science students Avaneesh Reddy and Novin Sim, to participate in the FASS Social Entrepreneurship Incubator (SEI) Pitch Competition at the end of last year.

Matthew, a third-year History student, shared how the team initially approached the competition with a broader idea to address Singapore’s textile waste problem, but the judges offered valuable guidance to help refine their pitch.

He recounted, “They advised that given the scale of the textile waste problem in Singapore, it would be more practical to focus on addressing the issue within a specific social group and in the process, create meaningful impact.” Because of this, the team decided to tap into Woofie’s unsold inventory and target their efforts on youths passionate about sustainability but who may not have access to creative reworking opportunities, entrepreneurial exposure or confidence-building platforms.

In addition to teaching youths how to upcycle textiles, the team will aim to leverage Novin’s background in life sciences to explore sustainable dyeing techniques using natural ingredients like coffee grounds for Repurposed. This will add another layer of innovation and eco-consciousness to their initiative. “We’re not just empowering youths to rework clothes,” Novin explained, “We’re equipping them with the skills to think creatively and to innovate.”

The team’s vision eventually earned them first prize in the competition – S$10,000 in seed funding to invest in the training workshops and equipment needed, as well as a one-year mentorship experience with the SEI steering committee to carry out the business idea.

Repurposed’s challenges and early wins

Launching Repurposed came with its fair share of challenges. The team initially struggled to recruit youths in April, as many were hesitant to commit long-term due to concerns about balancing involvement in the programme with schoolwork and exams. Others – such as students from fashion design schools – were drawn more by enrichment opportunities than genuine need.

Undeterred, the team pressed on. Through numerous interviews and extensive outreach, they successfully recruited an inaugural batch of four youths in May. Alongside Desiree, the youths completed a foundational textile reworking workshop at the end of May and have since independently crafted their very first bags, pouches and water bottle-holders.

To further support their learning journey, the team purchased sewing and embroidery machines and set up a dedicated workstation at Woofie’s Eunos back office. There, the youths continued to hone the skills learnt from the foundational workshop to develop more polished pieces, with guidance from the team. This August, they will sell their handcrafted items at a flea market held by social enterprise City Sprouts in Pasir Ris. They will also attend a second, advanced workshop at the end of the month to further refine their sewing and embroidery techniques and explore more distinctive patterns and designs, with the goal of eventually developing their own clothing lines, such as button-up shirts and ladies’ shirts, for sale at Woofie’s by year’s end. The team intends to recruit a second group of four to five youths early next year; selected participants will undergo the same training under the team’s mentorship.

Other ways the team plans to support the youths include creating a guide on basic social media marketing and tracking each youth’s product sales. Desiree’s participation in the first foundational workshop and later in the advanced workshop will allow the team to document and digitise the training process, ensuring future cohorts can learn from an online, structured knowledge base when Repurposed continues beyond the seed funding period.

Desiree, a Communications and New Media alumna, shared, “Repurposed is a meaningful initiative and we hope to continue it for as long as our stores are open. Rather than focus on training and recruiting a set number of youths within the seed funding period of one year, the team wants to focus on the longer-term goal of deepening the practical skills of the youths, building their confidence and deepen their appreciation for sustainability.”

One of the participants, Ms Bock Khai Yue, was happy to be a part of the programme. She said, “Through this programme I experienced first-hand the joy of turning an unsellable product into my own creation!”

Avaneesh, a third-year student majoring in Data Science and Economics, reflected on lessons the team has also learnt since the start of the initiative. “When it came to recruitment, we realised that many youths felt hesitant or intimidated to apply, even if they were interested. Some doubted their creativity or feared they would not fit in.” Moving forward, he shared that the team intends to feature more behind-the-scenes content of the reworking process when recruiting the second batch of participants, “not just to showcase what we do, but to make the programme feel more relatable, welcoming and achievable.”

One of the competition judges and a member of the SEI steering committee, Head of NUS Social Work Department Associate Professor Lee Geok Ling, commended the team for the way they had navigated challenges, absorbed feedback and continuously refined their approach towards Repurposed – a testament to both their growth and unwavering dedication.

She remarked, “Their remarkable balance of creative ambition with practical, sustainable considerations exemplifies the desirable mix of idealism and pragmatism that is essential for social entrepreneurs…As they continue to develop, I am excited by the potential, meaningful impact they can make on the youth community, and I am confident that Repurposed will keep thriving, becoming even stronger social entrepreneurs in the years ahead.”

Natural archives in coral skeletons show sea-level rise began accelerating earlier than previously thought: NUS-led study

A groundbreaking international study by marine scientists based in Singapore has revealed that sea-level rise in the Indian Ocean began accelerating far earlier than previously thought, with corals providing an unbroken natural record of ocean change stretching back to the early 20th century.

Published in Nature Communications, the study was led by Professor Paul Kench from the Department of Geography, at the National University of Singapore’s (NUS) Faculty of Arts and Social Sciences, in collaboration with researchers from NUS and Nanyang Technological University (NTU).

By analysing coral samples from the Maldives in the central Indian Ocean, the scientists reconstructed a century-long chronology of sea-level changes and climate shifts with remarkable precision.

They were able to extend the sea-level record for the Indian Ocean back a further 60 years, all the way to the early 1900s, offering a much longer and clearer historical context for interpreting modern sea-level changes.

The study yielded two significant findings.

The first is the pronounced acceleration of rising sea levels in the Indian Ocean from around 1959 – earlier than indicated by coastal tide gauges or satellite observations. The timing aligns closely with global temperature increases and accelerated glacial melt driven by human activity, showing that the Indian Ocean has been highly responsive to climatic changes for over half a century.

The second is that sea levels in the Indian Ocean, which covers approximately 30% of the world’s ocean area and supports around 30% of the global population, have risen significantly by 30cm since the middle of the 20th century.

“What we’re seeing is a clear fingerprint of human-driven climate change etched into the skeletons of corals. The early acceleration in sea-level rise is a warning sign that the ocean has been responding to global warming far earlier and more strongly than we thought,” said Prof Kench.

Accelerating sea-level rise threatens millions living in coastal areas with increased flooding, erosion, saltwater intrusion, and damage to vital ecosystems like mangroves and coral reefs.

The effects of sea-level changes in the Indian Ocean ripple beyond Asia to the rest of the world, underscoring the need for international cooperation to address global challenges such as water security, agriculture, and disaster preparedness.

For Singapore and its Southeast Asian neighbours, the uncovering of long-term patterns of sea-level changes can improve climate models and strengthen the region’s ability to plan for future risks under continued global warming.

For example, the new coral-derived data from the study offers a historical baseline that can enhance the efforts that Singapore already has in place to guard against rising sea levels, such as the Coastal-Inland Flood Model and Climate Impact Science Research Programme, by improving the accuracy of sea-level projections and informing adaptive strategies.

Corals confirmed as trusted recorders of sea-level and climate history

This research sets a new standard for how scientists can use coral to look back in time and understand how our oceans have changed.

As corals grow, they build their skeletons layer by layer similar to how trees form rings. Each layer captures details about the ocean at that time, such as temperature, salt levels, and even sea level.

To make sure the data from the analysed Indian Ocean coral samples was reliable, the team compared it with real sea-level measurements from tide stations and satellites, and found that they matched up closely.

The successful calibration of coral proxies against instrumental sea-level records means that corals can be trusted to tell us about past sea-level changes: their growth rates have been validated as reliable indicators of relative sea level, making them a powerful tool for climate research.
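
Schematically, such a calibration amounts to regressing coral growth-band measurements against overlapping instrumental records and then applying the fitted relation to older coral layers. The sketch below uses made-up numbers purely to show the shape of that workflow; it is not the study's method or data.

```python
# Schematic proxy calibration: fit coral growth rates to overlapping tide-gauge
# sea levels, then use the fit to reconstruct sea level from older coral layers.
# All numbers are invented for illustration; this is not the study's data.

import numpy as np

coral_growth = np.array([4.1, 4.3, 4.0, 4.6, 4.8, 5.0, 5.2])          # mm/year, overlap period
tide_gauge_sl = np.array([11.0, 14.0, 10.0, 19.0, 22.0, 25.0, 28.0])  # mm above a datum

# Linear calibration: sea level ~ slope * growth + intercept
slope, intercept = np.polyfit(coral_growth, tide_gauge_sl, 1)
r = np.corrcoef(coral_growth, tide_gauge_sl)[0, 1]
print(f"sea level ≈ {slope:.1f} × growth + {intercept:.1f}  (r = {r:.2f})")

# Apply the calibration to pre-instrumental coral layers
early_growth = np.array([3.6, 3.7, 3.8])
reconstructed_sl = slope * early_growth + intercept
print(reconstructed_sl)
```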

In addition to tracking long-term sea-level changes, the coral records also captured signs of extreme climate events such as unusual warming and cooling periods, as well as droughts. These signals closely match historical weather records, offering valuable insight into the frequency and severity of past climate extremes.

Prof Kench said, “These findings have far-reaching implications for climate adaptation planning globally, especially for low-lying and densely populated coastal regions. We hope this work contributes meaningfully to the global dialogue on resilience and preparedness.”

This pioneering research deepens our understanding of how oceans respond to climate change, while setting a new standard for using natural archives to reconstruct environmental history. As coastal nations around the world confront the realities of rising seas, the insights uncovered by Singapore’s marine scientists offer both a clearer view of the past and a critical guide for planning a more resilient future.


About the Study

The study titled “Coral growth records 20th Century sea-level acceleration and climatic variability in the Indian Ocean” was published in Nature Communications (link: https://rdcu.be/euiYQ) on 1 July 2025. It represents a milestone in paleoclimate science, combining marine biology, geochemistry, and oceanography to produce the most detailed Indian Ocean sea-level record to date.

The interdisciplinary approach taken for this study, combining marine biology, geochemistry, climatology, and oceanography, demonstrates the power of collaborative science in tackling complex global challenges. It also highlights the role of Singapore’s research institutions in contributing to world-class climate science.

The paper was also recently presented at the Asia Oceania Geosciences Society 2025 conference and picked up by Science Magazine (link: https://scienmag.com/coral-records-reveal-20th-century-sea-level-rise/).

Keeping kids safe in extreme heat

Little boy getting heat relief from water fountain.
Health

Keeping kids safe in extreme heat

Experts outline threats to childhood development, school challenges, play-time risks

Anna Lamb

Harvard Staff Writer

4 min read

With heat waves becoming more intense and frequent across the U.S., experts gathered for a Harvard webinar on how to protect children’s health amid soaring temperatures.

“Extreme heat is really one of the most dangerous but also one of the least recognized threats to healthy development,” said Lindsey Burghardt, chief science officer at Harvard’s Center on the Developing Child, which hosted the talk.

According to Burghardt, extreme heat has been linked to premature birth, low birth weight, disruptions in sleep and learning, and negative effects on mental health.

“These outcomes are really important for us to understand,” she said. “Because they have immediate effects in childhood, but they also have the ability to have effects and impacts across children’s lifetimes. This makes intervention just so important.”

The Environmental Protection Agency defines extreme heat days as those in which outside temperatures exceed 95 degrees Fahrenheit; the term also encompasses periods in which temperatures fail to drop, even at night. According to EPA statistics, these types of heat waves are becoming more frequent and severe.

Joining Burghardt in the talk was Michelle Kang, chief executive officer for the National Association for the Education of Young Children. She said her team has worked with Harvard’s Graduate School of Education to analyze how the changing climate affects learning and childcare practices across the country.

“Ideally, you’re able to take children out at times to have that gross motor play — that important time where they’re able to get those zoomies out,” she said. “But if you don’t have adequate shade, and it’s hot and getting hotter, then you actually can’t take your children outside. It changes what the learning environment looks like for the day to day and creates more stress on educators to ensure that they have what they need to keep their children safe.”

According to listening sessions with members of her organization, Kang said educators across the country are dealing with vastly different resources to keep kids cool during increasingly hot spring semesters and summer sessions. Many school buildings lack adequate air conditioning, for example, or fail to insulate against the extreme heat.

Another speaker, Jennifer Vanos, associate professor in the School of Sustainability and the College of Global Futures at Arizona State University, added that many schools also lack sufficient indoor play space.

Outdoor playgrounds, said Vanos, can turn into miniature heat islands — areas that increase dangerous heat even more.

“It really comes down to an individual school, and what their environment is like. What are their indoor conditions like? What are their outdoor conditions like? Some schools have really great shaded designs that are still OK to be playing in under slightly hotter weather,” she said. “Some schools can handle it better than others.”

During times of extreme heat, she said, it’s important for parents and educators to monitor play time.

“Kids want to keep playing,” she said. “What can happen if we don’t stop that soon enough is you’ll start to see the rise in the heart rate, because they’re trying to pump blood to the skin to lose heat from the body, and then you’ll also start to see a rise in sweat rate.”

Because kids have fewer sweat glands than adults, they aren’t able to release heat at the same rate, Vanos said. At the point when they’re getting sweaty, their core temperature is dangerously rising.

“If you see that rise above 104 degrees Fahrenheit or so, that’s when you can get into this very high risk of heat stroke occurring, or, if you’re playing, it’s more exertional heat stroke or a heat exhaustion,” she said.

In the direst cases of extreme heat exposure, the body can experience multiple organ failure and need hospitalization. And while it happens quickly, Vanos said, there are signs that indicate it’s time to cool off.

“One child is very different from another child, and we have to know which kids have potential pre-existing factors to account for certain medications, certain illnesses that they might have that make them higher risk than average,” she said. “We have to figure out the intervention points there — what are they, and how can we keep kids safe.”

Why common blood pressure readings may be misleading – and how to fix them

Nurse checking a patient's blood pressure

High blood pressure, or hypertension, is the top risk factor for premature death, associated with heart disease, strokes and heart attacks. However, inaccuracies in the most common form of blood pressure measurement mean that as many as 30% of cases of high blood pressure could be missed.

The researchers, from the University of Cambridge, built an experimental model that explained the physics behind these inaccuracies and provided a better understanding of the mechanics of cuff-based blood pressure readings.

The researchers say that some straightforward changes, which don’t necessarily involve replacing standard cuff-based measurement, could lead to more accurate blood pressure readings and better results for patients. Their results are reported in the journal PNAS Nexus.

Anyone who has ever had their blood pressure taken will be familiar with the cuff-based method. This type of measurement, also known as the auscultatory method, relies on inflating a cuff around the upper arm to the point where it cuts off blood flow to the lower arm, and then a clinician listens for tapping sounds in the arm through a stethoscope while the cuff is slowly deflated.

Blood pressure is inferred from readings taken from a pressure gauge attached to the deflating cuff. Blood pressure is given as two separate numbers: a maximum (systolic) and a minimum (diastolic) pressure. A blood pressure reading of 120/80 is considered ‘ideal’.

“The auscultatory method is the gold standard, but it overestimates diastolic pressure, while systolic pressure is underestimated,” said co-author Kate Bassil from Cambridge’s Department of Engineering. “We have a good understanding of why diastolic pressure is overestimated, but why systolic pressure is underestimated has been a bit of a mystery.”

“Pretty much every clinician knows blood pressure readings are sometimes wrong, but no one could explain why they are being underestimated — there’s a real gap in understanding,” said co-author Professor Anurag Agarwal, also from Cambridge’s Department of Engineering.

Previous non-clinical studies into measurement inaccuracy used rubber tubes that did not fully replicate how arteries collapse under cuff pressure, which masked the underestimation effect.

The researchers built a simplified physical model to isolate and study the effects of downstream blood pressure — the blood pressure in the part of the arm below the cuff. When the cuff is inflated and blood flow to the lower arm is cut off, it creates a very low downstream pressure. By reproducing this condition in their experimental rig, they determined this pressure difference causes the artery to stay closed for longer while the cuff deflates, delaying the reopening and leading to an underestimation of blood pressure.

This physical mechanism — the delayed reopening due to low downstream pressure — is the likely cause of underestimation, a previously unidentified factor. “We are currently not adjusting for this error when diagnosing or prescribing treatments, which has been estimated to lead to as many as 30% of cases of systolic hypertension being missed,” said Bassil.

Instead of the rubber tubes used in earlier physical models of arteries, the Cambridge researchers used tubes that lie flat when deflated and close fully when the cuff is inflated, the key condition for reproducing the low downstream pressure observed in the body.

The researchers say that there are a range of potential solutions to this underestimation, which include raising the arm in advance of measurement, potentially producing a predictable downstream pressure and therefore predictable underestimation. This change doesn’t require new devices, just a modified protocol.

“You might not even need new devices, just changing how the measurement is done could make it more accurate,” said Agarwal.

However, if new devices for monitoring blood pressure are developed, they might ask for additional inputs which correlate with downstream pressure, to adjust what the ‘ideal’ readings might be for each individual. These may include age, BMI, or tissue characteristics.

The researchers hope to secure funding for clinical trials to test their findings in patients, and are looking for industrial or research partners to help refine their calibration models and validate the effect in diverse populations. Collaboration with clinicians will also be essential to implement changes to clinical practice.

The research was supported by the Engineering and Physical Sciences Research Council (EPSRC), part of UK Research and Innovation (UKRI). Anurag Agarwal is a Fellow of Emmanuel College, Cambridge. 

Reference:
Kate Bassil and Anurag Agarwal. ‘Underestimation of systolic pressure in cuff-based blood pressure measurement.’ PNAS Nexus (2025). DOI: 10.1093/pnasnexus/pgaf222.

 

Researchers have found why common cuff-based blood pressure readings are inaccurate and how they might be improved, which could improve health outcomes for patients.

Creative Commons License.
The text in this work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. Images, including our videos, are Copyright ©University of Cambridge and licensors/contributors as identified. All rights reserved. We make our image and video content available in a number of ways – on our main website under its Terms and conditions, and on a range of channels including social media that permit your use and sharing of our content under their respective Terms.

Jessika Trancik named director of the Sociotechnical Systems Research Center

Jessika Trancik, a professor in MIT’s Institute for Data, Systems, and Society, has been named the new director of the Sociotechnical Systems Research Center (SSRC), effective July 1. The SSRC convenes and supports researchers focused on problems and solutions at the intersection of technology and its societal impacts.

Trancik conducts research on technology innovation and energy systems. At the Trancik Lab, she and her team develop methods drawing on engineering knowledge, data science, and policy analysis. Their work examines the pace and drivers of technological change, helping identify where innovation is occurring most rapidly, how emerging technologies stack up against existing systems, and which performance thresholds matter most for real-world impact. Her models have been used to inform government innovation policy and have been applied across a wide range of industries.

“Professor Trancik’s deep expertise in the societal implications of technology, and her commitment to developing impactful solutions across industries, make her an excellent fit to lead SSRC,” says Maria C. Yang, interim dean of engineering and William E. Leonhard (1940) Professor of Mechanical Engineering.

Much of Trancik’s research focuses on the domain of energy systems, and establishing methods for energy technology evaluation, including of their costs, performance, and environmental impacts. She covers a wide range of energy services — including electricity, transportation, heating, and industrial processes. Her research has applications in solar and wind energy, energy storage, low-carbon fuels, electric vehicles, and nuclear fission. Trancik is also known for her research on extreme events in renewable energy availability.

A prolific researcher, Trancik has helped measure progress and inform the development of solar photovoltaics, batteries, electric vehicle charging infrastructure, and other low-carbon technologies — and anticipate future trends. One of her widely cited contributions includes quantifying learning rates and identifying where targeted investments can most effectively accelerate innovation. These tools have been used by U.S. federal agencies, international organizations, and the private sector to shape energy R&D portfolios, climate policy, and infrastructure planning.
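
As a generic illustration of what quantifying a learning rate involves (an experience-curve fit on invented numbers, not Professor Trancik's actual models or data), one can regress log cost on log cumulative production and read off the percentage cost decline per doubling:

```python
# Generic experience-curve ("Wright's law") fit: cost = a * production**(-b).
# The learning rate is the fractional cost drop per doubling of cumulative
# production, 1 - 2**(-b). All numbers here are invented for illustration.

import numpy as np

cumulative_production = np.array([1, 2, 4, 8, 16, 32], dtype=float)  # e.g. GW installed
unit_cost = np.array([100, 80, 65, 52, 42, 34], dtype=float)         # e.g. $/W

# Fit log(cost) = log(a) - b * log(production); intercept is log(a)
slope, intercept = np.polyfit(np.log(cumulative_production), np.log(unit_cost), 1)
b = -slope
learning_rate = 1 - 2 ** (-b)

print(f"exponent b = {b:.2f}, learning rate = {learning_rate:.1%} per doubling")
```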

Trancik is committed to engaging and informing the public on energy consumption. She and her team developed the app carboncounter.com, which helps users choose cars with low costs and low environmental impacts.

As an educator, Trancik teaches courses for students across MIT’s five schools and the MIT Schwarzman College of Computing.

“The question guiding my teaching and research is how do we solve big societal challenges with technology, and how can we be more deliberate in developing and supporting technologies to get us there?” Trancik said in an article about course IDS.521/IDS.065 (Energy Systems for Climate Change Mitigation).

Trancik received her undergraduate degree in materials science and engineering from Cornell University. As a Rhodes Scholar, she completed her PhD in materials science at the University of Oxford. She subsequently worked for the United Nations in Geneva, Switzerland, and the Earth Institute at Columbia University. After serving as an Omidyar Research Fellow at the Santa Fe Institute, she joined MIT in 2010 as a faculty member.

Trancik succeeds Fotini Christia, the Ford International Professor of Social Sciences in the Department of Political Science and director of IDSS, who previously served as director of SSRC.

Professor Jessika Trancik conducts research on technology innovation and energy systems.

Harvey Kent Bowen, ceramics scholar and MIT Leaders for Global Operations co-founder, dies at 83

Harvey Kent Bowen PhD ’71, a longtime MIT professor celebrated for his pioneering work in manufacturing education, innovative ceramics research, and generous mentorship, died July 17 in Belmont, Massachusetts. He was 83.

At MIT, he was the founding engineering faculty leader of Leaders for Manufacturing (LFM) — now Leaders for Global Operations (LGO) — a program that continues to shape engineering and management education nearly four decades later.

Bowen spent 22 years on the MIT faculty, returning to his alma mater after earning a PhD in materials science and ceramics processing at the Institute. He held the Ford Professorship of Engineering, with appointments in the departments of Materials Science and Engineering (DMSE) and Electrical Engineering and Computer Science, before transitioning to Harvard Business School, where he bridged the worlds of engineering, manufacturing, and management. 

Bowen’s prodigious research output spans 190 articles, 45 Harvard case studies, and two books. In addition to his scholarly contributions, those who knew him best say his visionary understanding of the connection between management and engineering, coupled with his intellect and warm leadership style, set him apart at a time of rapid growth at MIT.  

A pioneering physical ceramics researcher

Bowen was born on Nov. 21, 1941, in Salt Lake City, Utah. As an MIT graduate student in the 1970s, he helped to redefine the study of ceramics — transforming it into the scientific field now known as physical ceramics, which focuses on the structure, properties, and behavior of ceramic materials.

“Prior to that, it was the art of ceramic composition,” says Michael Cima, the David H. Koch Professor of Engineering in DMSE. “What Kent and a small group of more-senior DMSE faculty were doing was trying to turn that art into science.”

Bowen advanced the field by applying scientific rigor to how ceramic materials were processed. He applied concepts from the developing field of colloid science — the study of particles evenly distributed in another material — to the manufacturing of ceramics, forever changing how such objects were made.

“That sparked a whole new generation of people taking a different look at how ceramic objects are manufactured,” Cima recalls. “It was an opportunity to make a big change. Despite the fact that physical ceramics — composition, crystal structure and so forth — had turned into a science, there still was this big gap: how do you make these things? Kent thought this was the opportunity for science to have an impact on the field of ceramics.”

One of his greatest scholarly accomplishments was “Introduction to Ceramics, 2nd edition,” with David Kingery and Donald Uhlmann, a foundational textbook he helped write early in his career. The book, published in 1976, helped maintain DMSE’s leading position in ceramics research and education.

“Every PhD student in ceramics studied that book, all 1,000 pages, from beginning to end, to prepare for the PhD qualifying exams,” says Yet-Ming Chiang, Kyocera Professor of Ceramics in DMSE. “It covered almost every aspect of the science and engineering of ceramics known at that time. That was why it was both an outstanding teaching text as well as a reference textbook for data.”

In ceramics processing, Bowen was also known for his control of particle size, shape, and size distribution, and how those factors influence sintering, the process of forming solid materials from powders.

Over time, Bowen’s interest in ceramics processing broadened into a larger focus on manufacturing. As such, Bowen was also deeply connected to industry and traveled frequently, especially to Japan, a leader in ceramics manufacturing.

“One time, he came back from Japan and told all of us graduate students that the students there worked so hard they were sleeping in the labs at night — as a way to prod us,” Chiang recalls.

While Bowen’s work in manufacturing began in ceramics, he also became a consultant to major companies, including automakers, and he worked with Lee Iacocca, the Ford executive behind the Mustang. Those experiences also helped spark LFM, which evolved into LGO. Bowen co-founded LFM with former MIT dean of engineering Tom Magnanti.

“I’m still in awe of Kent’s audacity and vision in starting the LFM program. The scale and scope of the program were, even for MIT standards, highly ambitious. Thirty-seven successful years later, we all owe a great sense of gratitude to Kent,” says LGO Executive Director Thomas Roemer, a senior lecturer at the MIT Sloan School of Management.

Bowen as mentor, teacher

Bowen’s scientific leadership was matched by his personal influence. Colleagues recall him as a patient, thoughtful mentor who valued creativity and experimentation.

“He had a lot of patience, and I think students benefited from that patience. He let them go in the directions they wanted to — and then helped them out of the hole when their experiments didn’t work. He was good at that,” Cima says.

Discipline was another hallmark of Bowen’s character. Chiang, who was an undergraduate and graduate student while Bowen was on the faculty, fondly recalls Bowen’s tendency to get up early, a source of amusement for his 3.01 (Kinetics of Materials) class.

“One time, some students played a joke on him. They got to class before him, set up an electric griddle, and cooked breakfast in the classroom before he arrived,” says Chiang. “When we all arrived, it smelled like breakfast.”

Bowen took a personal interest in Chiang’s career trajectory, arranging for him to spend a summer in Bowen’s lab through the Undergraduate Research Opportunities Program. Funded by the Department of Energy, the project explored magnetohydrodynamics: shooting a high-temperature plasma made from coal fly ash into a magnetic field between ceramic electrodes to generate electricity.

“My job was just to sift the fly ash, but it opened my eyes to energy research,” Chiang recalls.

Later, when Chiang was an assistant professor at MIT, Bowen served on his career development committee. He was both encouraging and pragmatic.

“He pushed me to get things done — to submit and publish papers at a time when I really needed the push,” Chiang says. “After all the happy talk, he would say, ‘OK, by what date are you going to submit these papers?’ And that was what I needed.”

After leaving MIT, Bowen joined Harvard Business School (HBS), where he wrote numerous detailed case studies, including one on A123 Systems, a battery company Chiang co-founded in 2001. 

“He was very supportive of our work to commercialize battery technology, and starting new companies in energy and materials,” Chiang says.

Bowen was also a devoted mentor for LFM/LGO students, even while at HBS. Greg Dibb MBA ’04, SM ’04 recalls that Bowen agreed to oversee his work on the management philosophy known as the Toyota Production System (TPS) — a manufacturing system developed by the Japanese automaker — responding kindly to the young student’s outreach and inspiring him with methodical, real-world advice.

“By some miracle, he agreed and made the time to guide me on my thesis work. In the process, he became a mentor and a lifelong friend,” Dibb says. “He inspired me in his way of working and collaborating. He was a master thinker and listener, and he taught me by example through his Socratic style, asking me simple but difficult questions that required rigor of thought.

“I remember he asked me about my plan to learn about manufacturing and TPS. I came to him enthusiastically with a list of books I planned to read. He responded, ‘Do you think a world expert would read those books?’”   

In trying to answer that question, Dibb realized the best way to learn was to go to the factory floor.

“He had a passion for the continuous improvement of manufacturing and operations, and he taught me how to do it by being an observer and a listener just like him — all the time being inspired by his optimism, faith, and charity toward others.”

Faith was a cornerstone of Bowen’s life outside of academia. He served a mission for The Church of Jesus Christ of Latter-day Saints in the Central Germany Mission and held several leadership roles, including bishop of the Cambridge, Massachusetts Ward, stake president of the Cambridge Stake, mission president of the Tacoma, Washington Mission, and temple president of the Boston, Massachusetts Temple. 

An enthusiastic role model who inspired excellence

During early-morning conversations, Cima learned about Bowen’s growing interest in manufacturing, which would spur what is now LGO. Bowen eventually became recognized as an expert in the Toyota Production System, the automaker’s operational culture and practice, which was a major influence on the LGO program’s curriculum design.

“I got to hear it from him — I was exposed to his early insights,” Cima says. “The fact that he would take the time every morning to talk to me — it was a huge influence.”

Bowen was a natural leader and set an example for others, Cima says.

“What is a leader? A leader is somebody who has the kind of infectious enthusiasm to convince others to work with them. Kent was really good at that,” Cima says. “What’s the way you learn leadership? Well, you’d look at how leaders behave. And really good leaders behave like Kent Bowen.”

MIT Sloan School of Management professor of the practice Zeynep Ton praises Bowen’s people skills and work ethic: “When you combine his belief in people with his ability to think big, something magical happens through the people Kent mentored. He always pushed us to do more,” Ton recalls. “Whenever I shared with Kent my research making an impact on a company, or my teaching making an impact on a student, his response was never just ‘good job.’ His next question was: ‘How can you make a bigger impact? Do you have the resources at MIT to do it? Who else can help you?’” 

A legacy of encouragement and drive

With this drive to do more, Bowen embodied MIT’s ethos, colleagues say.

“Kent Bowen embodies the MIT 'mens et manus' ['mind and hand'] motto professionally and personally as an inveterate experimenter in the lab, in the classroom, as an advisor, and in larger society,” says MIT Sloan senior lecturer Steve Spear. “Kent’s consistency was in creating opportunities to help people become their fullest selves, not only finding expression for their humanity greater than they could have achieved on their own, but greater than they might have even imagined on their own. An extraordinary number of people are directly in his debt because of this personal ethos — and even more have benefited from the ripple effect.”

Gregory Dibb, now a leader in the autonomous vehicle industry, is just one of them.

“Upon hearing of his passing, I immediately felt that I now have even more responsibility to step up and try to fill his shoes in sacrificing and helping others as he did — even if that means helping an unprepared and overwhelmed LGO grad student like me,” Dibb says.

Bowen is survived by his wife, Kathy Jones; his children, Natalie, Jennifer Patraiko, Melissa, Kirsten, and Jonathan; his sister, Kathlene Bowen; and six grandchildren. 

© Photo courtesy of the LGO Program.

Kent Bowen in 2014

Planets without water could still produce certain liquids, a new study finds

Water is essential for life on Earth. So, the reasoning goes, the liquid must be a requirement for life on other worlds. For decades, scientists’ definition of habitability on other planets has rested on this assumption.

But what makes some planets habitable might have very little to do with water. In fact, an entirely different type of liquid could conceivably support life in worlds where water can barely exist. That’s a possibility that MIT scientists raise in a study appearing this week in the Proceedings of the National Academy of Sciences.

From lab experiments, the researchers found that a type of fluid known as an ionic liquid can readily form from chemical ingredients that are also expected to be found on the surface of some rocky planets and moons. Ionic liquids are salts that exist in liquid form below about 100 degrees Celsius. The team’s experiments showed that a mixture of sulfuric acid and certain nitrogen-containing organic compounds produced such a liquid. On rocky planets, sulfuric acid may be a byproduct of volcanic activity, while nitrogen-containing compounds have been detected on several asteroids and planets in our solar system, suggesting the compounds may be present in other planetary systems.

The scientists propose that, even on planets that are too warm, or whose atmospheres are too low-pressure, to support liquid water, there could still be pockets of ionic liquid. And where there is liquid, there may be potential for life, though likely not anything that resembles Earth’s water-based beings.

Ionic liquids have extremely low vapor pressure and do not evaporate; they can form and persist at higher temperatures and lower pressures than what liquid water can tolerate. The researchers note that ionic liquid can be a hospitable environment for some biomolecules, such as certain proteins that can remain stable in the fluid.

“We consider water to be required for life because that is what’s needed for Earth life. But if we look at a more general definition, we see that what we need is a liquid in which metabolism for life can take place,” says Rachana Agrawal, who led the study as a postdoc in MIT’s Department of Earth, Atmospheric and Planetary Sciences. “Now if we include ionic liquid as a possibility, this can dramatically increase the habitability zone for all rocky worlds.”

The study’s MIT co-authors are Sara Seager, the Class of 1941 Professor of Planetary Sciences in the Department of Earth, Atmospheric and Planetary Sciences and a professor in the departments of Physics and of Aeronautics and Astronautics, along with Iaroslav Iakubivskyi, Weston Buchanan, Ana Glidden, and Jingcheng Huang. Co-authors also include Maxwell Seager of Worcester Polytechnic Institute, William Bains of Cardiff University, and Janusz Petkowski of Wroclaw University of Science and Technology, in Poland.

A liquid leap

The team’s work with ionic liquid grew out of an effort to search for signs of life on Venus, where clouds of sulfuric acid envelop the planet in a noxious haze. Despite its toxicity, Venus’ clouds may contain signs of life — a notion that scientists plan to test with upcoming missions to the planet’s atmosphere.

Agrawal and Seager, who is leading the Morning Star Missions to Venus, were investigating ways to collect and evaporate sulfuric acid. If a mission collects samples from Venus’ clouds, sulfuric acid would have to be evaporated away in order to reveal any residual organic compounds that could then be analyzed for signs of life.

The researchers were using their custom low-pressure system, designed to evaporate away excess sulfuric acid, to test evaporation of a solution of the acid and an organic compound, glycine. They found that in every case, while most of the liquid sulfuric acid evaporated, a stubborn layer of liquid always remained. They soon realized that the sulfuric acid was chemically reacting with the glycine, transferring hydrogen atoms from the acid to the organic compound. The result was a fluid mixture of salts, or ions, known as an ionic liquid, which persists as a liquid across a wide range of temperatures and pressures.

This accidental finding kickstarted an idea: Could ionic liquid form on planets that are too warm and host atmospheres too thin for water to exist?

“From there, we took the leap of imagination of what this could mean,” Agrawal says. “Sulfuric acid is found on Earth from volcanoes, and organic compounds have been found on asteroids and other planetary bodies. So, this led us to wonder if ionic liquids could potentially form and exist naturally on exoplanets.”

Rocky oases

On Earth, ionic liquids are mainly synthesized for industrial purposes. They do not occur naturally, except in one specific case, in which the liquid is generated from the mixing of venoms produced by two rival species of ants.

The team set out to investigate the conditions under which ionic liquid could be produced naturally, and over what range of temperatures and pressures. In the lab, they mixed sulfuric acid with various nitrogen-containing organic compounds. In previous work, Seager’s team had found that the compounds, some of which can be considered ingredients associated with life, are surprisingly stable in sulfuric acid.

“In high school, you learn that an acid wants to donate a proton,” Seager says. “And oddly enough, we knew from our past work with sulfuric acid (the main component of Venus’ clouds) and nitrogen-containing compounds, that a nitrogen wants to receive a hydrogen. It’s like one person’s trash is another person’s treasure.”

The reaction could produce a bit of ionic liquid if the sulfuric acid and nitrogen-containing organics were in a one-to-one ratio — a ratio that was not a focus of the prior work. For their new study, Seager and Agrawal mixed sulfuric acid with over 30 different nitrogen-containing organic compounds, across a range of temperatures and pressures, then observed whether ionic liquid formed when they evaporated away the sulfuric acid in various vials. They also mixed the ingredients onto basalt rocks, which are known to exist on the surface of many rocky planets.

Three chunks of rock

The team found that the reactions produced ionic liquid at temperatures up to 180 degrees Celsius and at extremely low pressures — much lower than that of the Earth’s atmosphere. Their results suggest that ionic liquid could naturally form on other planets where liquid water cannot exist, under the right conditions.

“We were just astonished that the ionic liquid forms under so many different conditions,” Seager says. “If you put the sulfuric acid and the organic on a rock, the excess sulfuric acid seeps into the rock pores, but you’re still left with a drop of ionic liquid on the rock. Whatever we tried, ionic liquid still formed.”

“We’re envisioning a planet warmer than Earth, that doesn’t have water, and at some point in its past or currently, it has to have had sulfuric acid, formed from volcanic outgassing,” Seager says. “This sulfuric acid has to flow over a little pocket of organics. And organic deposits are extremely common in the solar system.”

Then, she says, the resulting pockets of liquid could stay on the planet’s surface, potentially for years or millennia, where they could theoretically serve as small oases for simple forms of ionic-liquid-based life. Going forward, Seager’s team plans to investigate further, to see what biomolecules, and ingredients for life, might survive, and thrive, in ionic liquid.

“We just opened up a Pandora’s box of new research,” Seager says. “It’s been a real journey.”

This research was supported, in part, by the Sloan Foundation and the Volkswagen Foundation.

© Credit: Jose-Luis Olivares, MIT

“We consider water to be required for life because that is what’s needed for Earth life. But if we look at a more general definition, we see that what we need is a liquid in which metabolism for life can take place,” says Rachana Agrawal.

Possible clue into movement disorders like Parkinson’s, others

Science & Tech

Possible clue into movement disorders like Parkinson’s, others

Kiah Hardcastle

Kiah Hardcastle.

Stephanie Mitchell/Harvard Staff Photographer

Kermit Pattison

Harvard Staff Writer 

4 min read

Rodent study suggests different signaling ‘languages’ in parts of brain for learned skills, natural behaviors

Among the many wonders of the brain is its ability to master movements through practice — a dance step, piano sonata, or tying our shoes.

For decades, neuroscientists have known that these tasks require a cluster of brain areas known as the basal ganglia.

According to a new study in Nature Neuroscience led by Harvard researchers, this so-called “learning machine” speaks in two different codes — one for recently acquired learned movements and another for innate “natural” behaviors.

These surprising findings with lab animals may shed light on human movement disorders such as Parkinson’s disease.

“When we compared the codes across these two behavioral domains, we found that they were very different,” said Bence Ölveczky, professor of organismic and evolutionary biology (OEB). “They had nothing to do with each other. They were both faithfully reflecting the animal’s movements, but the language was profoundly different.”

“When we compared the codes across these two behavioral domains, we found that they were very different.”

Bence P. Ölveczky

Located in the midbrain below the cerebral cortex, the basal ganglia are involved in reward, emotion, and motor control. This region also is the site of some of our most concerning movement disorders: Huntington’s disease, Tourette’s syndrome, and Parkinson’s all arise from different defects of the basal ganglia.

Although it has long been known that the basal ganglia play a central role in motor control among mammals, it remains unclear whether this part of the brain directs all movements or just those for specialized tasks.

Some researchers posit that the basal ganglia act as a learning locus for movements acquired through practice, but not other routine behaviors. Other scholars argue that it plays a role in all movements.

To shed light on this mystery, the researchers scrutinized one particular part of the basal ganglia in rats — the dorsolateral striatum (DLS), which plays a role in learned behaviors.

The team studied rats during two different activities: free exploration and a learned task in which they were trained to press a lever twice within a specific time interval to obtain a reward. To track their movements, the team used a system of six cameras around the enclosure plus a software system that categorized behaviors.

In earlier studies, the team removed the DLS of rats, who afterward showed no differences in free exploration, demonstrating that it played no role in natural behaviors such as walking or grooming.

But the same animals were profoundly impaired when performing learned tasks, revealing that the DLS was essential for the newly acquired skills.

“There was a massive change, like night and day,” said Kiah Hardcastle, a postdoctoral fellow in the Ölveczky lab and lead author of the new study. “The animal could do a task super well, performing a stereotyped movement repeatedly, like 30,000 times. Then you lesion the DLS, and they never do that movement again.”

In the new study, the investigators sought to understand the neural activity during these behaviors, implanting tiny electrodes into the brains of rats and recording the electrical firing of neurons as they engaged in free exploration and the learned task.

To their surprise, they discovered the basal ganglia used two distinct “kinematic codes” — or patterns of neuronal electrical activity — during the learned task and natural movements.

“It’s as if the basal ganglia ‘speak’ different languages when the animal performs learned versus innate movements,” said Ölveczky. “Brain areas downstream that control movement only know one of these languages — the one spoken during learned behaviors.”

“It’s as if the basal ganglia ‘speak’ different languages when the animal performs learned versus innate movements.”

Bence P. Ölveczky

The researchers concluded in the paper that the basal ganglia switch back and forth “between being an essential actor and a mere observer.”

Hardcastle speculated that the basal ganglia may be unable to completely turn off electrical signaling when not directing behavior, so they shift to a harmless “null code.”

Ölveczky said the findings may well be informative about humans because the structures below the cerebral cortex are believed to have remained largely conserved through evolutionary time. He believes the study demonstrates that the basal ganglia play essential roles in learned movements — but not necessarily in routine motor control.

He also thinks the findings offer hints about what may go wrong in some human movement disorders.

“Our research suggests that the pathology associated with Parkinson’s can be understood as the diseased basal ganglia speaking gibberish, but in a very loud and forceful way,” said Ölveczky. “Thus, it inserts itself, in a nonsensical way, into behaviors it would otherwise not control.”


Federal funding for the research was provided by the National Institutes of Health.

‘Turning information into something physical’

Science & Tech

‘Turning information into something physical’

Photo illustration by Liz Zonarich/Harvard Staff

Anna Lamb

Harvard Staff Writer

4 min read

Houghton exhibit looks at how punched cards — invented 300 years ago to streamline weaving — led to modern computing

The punched card, a paper instrument invented 300 years ago to automate looms, helped create a technology that most of us today can’t live without: computers.

A new Houghton Library exhibition — “The Punched Card from the Industrial Revolution to the Information Age” — on view in the library’s lobby through the end of the summer, traces the technology’s history through three works: a book from 1886 woven entirely with a punched card loom; the writing of mathematician Ada Lovelace on the punched card’s computer capabilities; and a 1940s manual on using a punched card computer.

“Computers now permeate almost every aspect of our society,” said the exhibition’s curator, John Overholt. “It’s interesting to learn more about the roots of things that feel very commonplace and widespread these days … to learn how those things evolved over time can provide new insights.”

Punched cards, or punch cards as they are often called, are thought to have originated in 1725, when French silk weaver Basile Bouchon invented the use of a paper tape with punched holes to automate the work of a loom. But perhaps the best-known early example comes from French inventor Joseph Marie Jacquard, who in the early 19th century used a series of punched cards to create intricate brocade patterns. The holes in each card controlled the threads forming a single row of the design.

Historians think the first time the technology was used for data collection and analysis was in the late 1880s, when American engineer Herman Hollerith created punched cards for gathering statistical information for the U.S. Census.

“That’s the thing computer historians are most likely to fight about — what the cutoff is,” said Marc Aidinoff, who teaches the history of technology at Harvard. “You get some people who say, ‘Well actually, programming a loom is not that different from computing. It’s putting in directions.’”

Aidinoff added that there is one thing that all tech historians can agree on: “There is no computing without punch cards. When you think of what a semiconductor is doing, it’s really a very similar system to a punch card, just at a vastly more complex scale.”

The earliest use of punched cards to process Census data drastically sped up the time to count results, marking a milestone on the path to modern computing. Hollerith’s company — which started as the Tabulating Machine Company based out of Washington, D.C. — would go on to become computer giant IBM.

At Harvard, graduate student Howard Aiken designed the Mark I in 1937 — a first-of-its-kind computer able to make a wide array of calculations using punched paper tape.

Aiken partnered with IBM engineers to develop the machine, and after five years Mark I was delivered to Harvard, where it was operated by the U.S. Navy Bureau of Ships for military purposes over the next decade.

Punched card computing continued throughout the next several decades — improving alongside evolving microprocessing and memory capabilities.

Overholt, curator of Early Books and Manuscripts at Houghton, remembers the discarded punched cards his mom would bring home from her job at IBM throughout the 1960s.

Exhibit curator John Overholt used to play with the discarded punched cards his mom would bring home from her job at IBM throughout the 1960s.

Photo courtesy of John Overholt

“She would bring home punch cards that had been used to program computers for us to play with and build little card houses out of,” he said.

Today, Harvard is home to supercomputers that make punch card computers look like an abacus. But at Houghton, you can see the seeds of innovation that started it all.

“Present-day computer technology has moved in new directions but encoding in ones and zeros and bits and bytes is still pretty fundamental to the way computers work,” Overholt said. “It’s hard for me to put myself in the position of what somebody 300 years ago would have imagined about computers, but I’m sure it was clear right away that it was a very powerful tool for turning information into something physical.”


“The Punched Card from the Industrial Revolution to the Information Age” will be on view in the Houghton Library lobby through the end of summer. Mark I is on display in the East Atrium of Harvard’s Science and Engineering Complex in Allston.

Surprisingly diverse innovations led to dramatically cheaper solar panels

The cost of solar panels has dropped by more than 99 percent since the 1970s, enabling widespread adoption of photovoltaic systems that convert sunlight into electricity.

A new MIT study drills down on specific innovations that enabled such dramatic cost reductions, revealing that technical advances across a web of diverse research efforts and industries played a pivotal role.

The findings could help renewable energy companies make more effective R&D investment decisions and aid policymakers in identifying areas to prioritize to spur growth in manufacturing and deployment.

The researchers’ modeling approach shows that key innovations often originated outside the solar sector, including advances in semiconductor fabrication, metallurgy, glass manufacturing, oil and gas drilling, construction processes, and even legal domains.

“Our results show just how intricate the process of cost improvement is, and how much scientific and engineering advances, often at a very basic level, are at the heart of these cost reductions. A lot of knowledge was drawn from different domains and industries, and this network of knowledge is what makes these technologies improve,” says study senior author Jessika Trancik, a professor in MIT’s Institute for Data, Systems, and Society.

Trancik is joined on the paper by co-lead authors Goksin Kavlak, a former IDSS graduate student and postdoc who is now a senior energy associate at the Brattle Group; Magdalena Klemun, a former IDSS graduate student and postdoc who is now an assistant professor at Johns Hopkins University; former MIT postdoc Ajinkya Kamat; as well as Brittany Smith and Robert Margolis of the National Renewable Energy Laboratory. The research appears today in PLOS ONE.

Identifying innovations

This work builds on mathematical models that the researchers previously developed that tease out the effects of engineering technologies on the cost of photovoltaic (PV) modules and systems.

In this study, the researchers aimed to dig even deeper into the scientific advances that drove those cost declines.

They combined their quantitative cost model with a detailed, qualitative analysis of innovations that affected the costs of PV system materials, manufacturing steps, and deployment processes.

“Our quantitative cost model guided the qualitative analysis, allowing us to look closely at innovations in areas that are hard to measure due to a lack of quantitative data,” Kavlak says.

Building on earlier work identifying key cost drivers — such as the number of solar cells per module, wiring efficiency, and silicon wafer area — the researchers conducted a structured scan of the literature for innovations likely to affect these drivers. Next, they grouped these innovations to identify patterns, revealing clusters that reduced costs by improving materials or prefabricating components to streamline manufacturing and installation. Finally, the team tracked industry origins and timing for each innovation, and consulted domain experts to zero in on the most significant innovations.

All told, they identified 81 unique innovations that affected PV system costs since 1970, from improvements in antireflective coated glass to the implementation of fully online permitting interfaces.

“With innovations, you can always go to a deeper level, down to things like raw materials processing techniques, so it was challenging to know when to stop. Having that quantitative model to ground our qualitative analysis really helped,” Trancik says.

They chose to separate PV module costs from so-called balance-of-system (BOS) costs, which cover things like mounting systems, inverters, and wiring.

PV modules, which are wired together to form solar panels, are mass-produced and can be exported, while many BOS components are designed, built, and sold at the local level.

“By examining innovations both at the BOS level and within the modules, we identify the different types of innovations that have emerged in these two parts of PV technology,” Kavlak says.

BOS costs depend more on soft technologies, nonphysical elements such as permitting procedures, which have contributed significantly less to PV’s past cost improvement compared to hardware innovations.

“Often, it comes down to delays. Time is money, and if you have delays on construction sites and unpredictable processes, that affects these balance-of-system costs,” Trancik says.

Innovations such as automated permitting software, which flags code-compliant systems for fast-track approval, show promise. Though not yet quantified in this study, the team’s framework could support future analysis of their economic impact and similar innovations that streamline deployment processes.

Interconnected industries

The researchers found that innovations from the semiconductor, electronics, metallurgy, and petroleum industries played a major role in reducing both PV and BOS costs, but BOS costs were also impacted by innovations in software engineering and electric utilities.

Noninnovation factors, like efficiency gains from bulk purchasing and the accumulation of knowledge in the solar power industry, also reduced some cost variables.

In addition, while most PV panel innovations originated in research organizations or industry, many BOS innovations were developed by city governments, U.S. states, or professional associations.

“I knew there was a lot going on with this technology, but the diversity of all these fields and how closely linked they are, and the fact that we can clearly see that network through this analysis, was interesting,” Trancik says.

“PV was very well-positioned to absorb innovations from other industries — thanks to the right timing, physical compatibility, and supportive policies to adapt innovations for PV applications,” Klemun adds.

The analysis also reveals the role greater computing power could play in reducing BOS costs through advances like automated engineering review systems and remote site assessment software.

“In terms of knowledge spillovers, what we've seen so far in PV may really just be the beginning,” Klemun says, pointing to the expanding role of robotics and AI-driven digital tools in driving future cost reductions and quality improvements.

In addition to their qualitative analysis, the researchers demonstrated how this methodology could be used to estimate the quantitative impact of a particular innovation if one has the numerical data to plug into the cost equation.

For instance, using information about material prices and manufacturing procedures, they estimate that wire sawing, a technique introduced in the 1980s, led to an overall PV system cost decrease of $5 per watt by reducing silicon losses and increasing throughput during fabrication.
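As a rough illustration of what plugging numerical data into a cost equation can look like, here is a minimal sketch in Python. The study’s actual cost model and figures are not given in this article, so the formula, parameter names, and numbers below are invented placeholders chosen only to show the before-and-after attribution idea, not the researchers’ method.

```python
# Illustrative sketch only: a toy per-watt cost model used to attribute a cost
# change to a single process innovation (here, wire sawing). The cost formula,
# parameter names, and every number below are invented placeholders, not the
# study's actual model or data.

def module_cost_per_watt(silicon_cost_per_kg, kerf_loss_fraction, wafers_per_hour,
                         other_cost_per_watt, silicon_kg_per_wafer=0.02,
                         watts_per_wafer=4.0, line_cost_per_hour=500.0):
    """Toy model: silicon cost (inflated by sawing losses) plus a
    throughput-dependent processing cost plus everything else."""
    silicon = silicon_cost_per_kg * silicon_kg_per_wafer / (1.0 - kerf_loss_fraction)
    processing = line_cost_per_hour / wafers_per_hour
    return (silicon + processing) / watts_per_wafer + other_cost_per_watt

# Hypothetical "before" and "after" parameter sets for the sawing step:
# the innovation reduces silicon losses and raises throughput.
before = module_cost_per_watt(silicon_cost_per_kg=400.0, kerf_loss_fraction=0.5,
                              wafers_per_hour=50.0, other_cost_per_watt=3.0)
after = module_cost_per_watt(silicon_cost_per_kg=400.0, kerf_loss_fraction=0.3,
                             wafers_per_hour=120.0, other_cost_per_watt=3.0)

print(f"Estimated contribution of the innovation: {before - after:.2f} $/W")
```

Holding every other parameter fixed and changing only the ones an innovation touches is what lets a difference in the computed cost per watt be attributed to that single innovation.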

“Through this retrospective analysis, you learn something valuable for future strategy because you can see what worked and what didn’t work, and the models can also be applied prospectively. It is also useful to know what adjacent sectors may help support improvement in a particular technology,” Trancik says.

Moving forward, the researchers plan to apply this methodology to a wide range of technologies, including other renewable energy systems. They also want to further study soft technology to identify innovations or processes that could accelerate cost reductions.

“Although the process of technological innovation may seem like a black box, we’ve shown that you can study it just like any other phenomena,” Trancik says.

This research is funded, in part, by the U.S. Department of Energy Solar Energy Technologies Office.

© Image: MIT News; iStock

“Our results show just how intricate the process of cost improvement is, and how much scientific and engineering advances, often at a very basic level, are at the heart of these cost reductions,” says Jessika Trancik.

Carving a place in outer space for the humanities

Arts & Culture

Carving a place in outer space for the humanities

Jennifer L. Roberts

Jennifer L. Roberts.

Stephanie Mitchell/Harvard Staff Photographer

Eileen O’Grady

Harvard Staff Writer

6 min read

The cosmos ‘is as weird and astonishing as any great work of art,’ argues Jennifer Roberts, and navigating it requires ‘a new kind of ethics’

Jennifer Roberts is an art historian whose work orbits an unexpected subject: outer space. Fascinated by images that are created as a way of understanding the unknown, she builds alliances between scientists and humanists — work she finds even more urgent as we enter an age of commercial space travel.

“Astronomers and art scholars should be working together whenever we can,” said Roberts, X.D. and Nancy Yang Professor of Arts and Sciences and Drew Gilpin Faust Professor of the Humanities. “We both know that images are not just illustrations; they are tools for understanding and interpretation, and they have a powerful role in shaping what humanity will do with the revelations about the universe that science is delivering.”

Roberts will publish a study later this year on the first image transmitted from Mars, paradoxically drawn in pastel on paper. In 1965, the 21 images captured by the Mariner 4 probe in its flyby of Mars were being transmitted too slowly for scientists at the Pasadena Jet Propulsion Lab: Each took eight hours to process. Desperate for the first glimpse of the then-mysterious planet, they bought a box of Rembrandt soft pastels from a nearby art store, pinned the incoming numerical data to a wall, and colored by number each pixel, using a color-code system with brown representing the darkest sections of the image and yellow the brightest.

“This is a really interesting story to me because it indicates one of the many ways in which scientists rely upon visualization,” said Roberts. “They needed to create an image in order to understand and interpret the data. And it’s not irrelevant that they used the fugitive, dusty medium of pastel to do it — artists have long used pastel as a visual technology for perceiving hidden or transient realities.”

A real-time data translator machine converted Mariner 4 digital image data into numbers printed on strips of paper. The team colored in the strips by hand with pastels, making this both a work of art and the first digital image from space.

NASA/JPL-Caltech

Roberts, who attributes her interest in science and the humanities to watching Carl Sagan’s “Cosmos: A Personal Voyage” on PBS as a child, is also currently working on a book about the Voyager Golden Record, which she calls the “most distant work of art ever created.” “The Heartbeat at the Edge of the Solar System: Science, Emotion, and the Golden Record,” a collaboration with artist and writer Dario Robleto, will be published by Scribner in 2026.

Her other research interests include the astronomical photographic glass plate collection at the Harvard College Observatory, and contemporary artists like Anna Von Mertens and Clarissa Tossin, who are incorporating outer space data into their work.

Images of space determine how we think about it, Roberts explained, especially the images typically published by NASA such as those taken by the Webb and Hubble telescopes, which are not raw snapshots but carefully constructed visuals made from data that is often captured beyond the visible spectrum. The images are colored, cropped, rotated, and edited to help viewers make sense of something fundamentally unfamiliar, she said. These aesthetic choices are necessary to make the images visible at all, but they can shift how we perceive outer space, often making it feel closer and more comprehensible than it really is.

Roberts pointed to research by Stanford scholar Elizabeth Kessler, who found that Hubble visualization scientists often styled space imagery to resemble 19th-century paintings of the American West — incidentally framing the cosmos as something desirable, traversable, and ripe for exploration.

Roberts says she admires the expertise and imagination that went into these images. “But there are so many other ways to render the same data, and it’s important that people understand that,” she said. “You could have taken the famous ‘Cosmic Cliffs’ image in which a nebula is cropped to look like a rock face and turned it upside down, and it would have been equally scientifically valid. You could have used any number of other colors. It could have been made to look much, much stranger.”

She worries about this when it comes to commercial space ventures depicting outer space as “there for the taking.” Its narrative, she feels, is all too similar to Earth’s most destructive colonial pursuits.

“We’re about to step off the planet and I’m worried that we’re going to repeat all the same mistakes that we’ve made before,” Roberts said. “We are talking about space as a ‘frontier,’ as something to be colonized or occupied. But we should be listening to what the science tells us: Space is as weird and astonishing as any great work of art. It does not support the status quo.”

This is one reason Roberts believes humanists need a stronger presence in conversations about outer space. She’s noticed a tendency for some humanities scholars to dismiss space as escapist or eccentric, and a distraction from Earth’s real problems, but she disagrees.

“We’ve ceded the heavens, to some extent, to the tech sector, to scientists, to commercial ventures,” Roberts said. “It doesn’t seem to be a place where we can exercise our skills. But while we haven’t been paying attention, we have come to the brink of a new space age that is now upon us. Our move into space is going to require a totally new kind of ethics and a totally new philosophy and we aren’t going to be able to access that if we don’t have the arts and humanities involved in close collaboration with scientists.”

“Our move into space is going to require a totally new kind of ethics and a totally new philosophy and we aren’t going to be able to access that if we don’t have the arts and humanities involved in close collaboration with scientists.”

Jennifer L. Roberts.

To put this idea into action, Roberts has begun teaching “Art and Science of the Moon” in the Department of History of Art and Architecture. The experimental seminar focuses on the world history of artistic engagement with the moon, including the response of photographers and conceptual artists to the Apollo program in the 1960s and ’70s. She hopes to teach a similar seminar on Mars.

She’s also starting a seminar at the Mahindra Humanities Center this fall titled “Celestial Spheres,” which will bring scientists and humanists together to talk about what’s happening outside planet Earth.

Roberts wants to think about outer space as something more like an ocean in which we are immersed than a void filled with image targets.

“What would it mean if we didn’t think about it as a frontier that we had to cross and conquer?” Roberts said. “What if we thought about it as an ecosystem, something that we are already part of?”

‘Hopeful message’ on brain disease

Sanjula Singh.

Veasey Conway/Harvard Staff Photographer

Health

‘Hopeful message’ on brain disease

Researcher Sanjula Singh has looked at stroke, dementia, late-life depression for years, finds lifestyle changes make big difference

Jacob Sweet

Harvard Staff Writer

7 min read

Sanjula Singh wants people to know that stroke, dementia, and depression are much more preventable than they might think.

“The most common misconception that a lot of people have is that Alzheimer’s or depression or stroke is like a train coming down the tracks,” said Singh, a principal investigator at Massachusetts General Hospital and Harvard Medical School’s Brain Care Labs who has been studying brain disease for years.

Though genetics plays a role in the development of these illnesses, Singh’s research has helped show that up to 80 percent of strokes, 45 percent of all instances of dementia, and 35 percent of late-life depression can be addressed through behavioral changes.

One of the most potent risk factors of dementia, Singh explained, is high blood pressure. Instead of focusing on treating diseases, Singh has aimed to help people avoid them in the first place.

“I think what I communicate is a very hopeful message,” she said. “There’s so much you have in your own hands that you can do to remain healthy and happy. … It’s so simple, but I think that’s what makes it so powerful.”

Singh, born to a family of doctors in the Netherlands, originally planned to be a singer-songwriter upon graduating from high school. But after studying at the Codarts Conservatory in Rotterdam, she felt drawn back toward science.

“I loved the creative process, and I also loved solving complex problems. I realized I didn’t have to choose — I wanted a life that made room for both.”

“There’s so much you have in your own hands that you can do to remain healthy and happy. … It’s so simple, but I think that’s what makes it so powerful.”

After traveling around the world, she began medical school the next year, at first aiming to become a neurosurgeon.

Hoping to make an impression on Bart Brouwers, a neurosurgeon who she thought might have room in his lab, she spent a full night during her first year of medical school trying to memorize his dissertation. He referred her to Gabriël Rinkel, a professor of neurology at University Medical Centre Utrecht. Though Singh hadn’t yet taken a course on the brain, Rinkel said she could start working on research that would later become her Ph.D. thesis.

As she navigated medical school in Utrecht, it was the research side that fascinated her most. She spent much of her neurosurgery Ph.D. studying cerebellar intracerebral hemorrhage, a deadly subtype of stroke occurring in the cerebellum.

Her research eventually led to changes in international treatment guidance for the disease. Though that was a rewarding outcome, she also realized that, while her work could help a small group of people, it wouldn’t stop strokes from happening.

“I wanted to be on the forefront,” she said. “I wanted to prevent the suffering.”

She got her first major exposure to some of these modifiable risk factors in grad school while working in the lab of Josh Goldstein, professor of emergency medicine at Harvard Medical School and a co-supervisor along with Rinkel. Her work covered specific neurosurgical topics, but she started seeing how influential modifiable risk factors were — even for those who had already suffered a stroke.

To learn how to develop questions and conduct analytical research about these risk factors, she took a year to study epidemiology and statistics during a master’s degree at the University of Oxford.

“I came across so many great datasets in which I just saw how much brain disease could be prevented,” she said, “but I wasn’t sure who was truly leading that work.”

80% — of strokes are attributable to modifiable risk factors, according to Singh’s research

Singh hadn’t planned to return to the U.S. after completing her Ph.D., until Jonathan Rosand, a Harvard professor of neurology and member of her Ph.D. dissertation committee, changed her mind.

During a walk-and-talk, Rosand shared his vision for a new lab focused on preventing brain disease, which would go on to take the name Brain Care Labs.

“I believed in him — and in what he was building,” Singh said. “I told him, ‘I want you to be my mentor. Wherever you go, I’ll follow.’”

At the Brain Care Labs, Singh began spending her time exploring brain health and the factors controlling it. In 2022, she was the lead author of “Brain health begins with brain care,” an article in The Lancet that called for a rapid response to what major health organizations have called a global brain-health crisis.

“Although prevention of brain disease is yet to be a focus of primary care medicine,” she wrote, “a crucial opportunity exists to leverage the global acceptance that more than 40 percent of dementia, stroke, and depression cases are attributable to modifiable risk factors.”

With her new colleagues, Singh helped develop the Brain Care Score, a tool for people to gauge how their habits affect their brain health, backed by data collected from hundreds of thousands of adults followed for more than a decade.

Instead of simply predicting disease, the score was designed to help people modify risk factors that can increase the chance of stroke, dementia, and depression. Those risk factors span three domains: physical (e.g., blood pressure, blood sugar, cholesterol), lifestyle (diet, exercise, sleep), and social-emotional (stress, relationships, purpose in life). 

“It doesn’t matter where you’re starting. What matters is that you begin. Improving — even just a little — is the way forward.”

Singh continues to build upon her research to strengthen the scientific link between modifiable risk factors and brain diseases.

Recently, she and her team identified 17 overlapping factors that affect one’s risk of stroke, dementia, and late-life depression. By knowing and adjusting even one of these factors, people can reduce their risk of suffering from brain diseases long thought to be intractable.

“Start with something small and doable,” Singh said. “Those first steps can create momentum — and over time, they can lead to powerful change.”

As Singh figures out what causes brain diseases, she’s also working on helping people adjust their lifestyles. “We know behavior change is really hard,” she said, “and, amongst other things, we know that individual health coaching can actually work.”

She and her labmates are approaching implementation from a few levels. Through the Global Brain Care Coalition Rosand founded in 2024, Singh and her colleagues have developed community-specific Brain Care Scores to make sure adjustments to factors such as diet are relevant and applicable to different cultural groups across the world.

They’ve also recently applied for a grant for an AI Avatar that can help coach people toward small changes in their daily life.

They’re building physical tools, too, like a product to improve medication adherence that they’re now testing in a clinical trial. Singh imagines creating a whole suite of products that can help people manage their health in easy, accessible ways. She wants to make products that can blend right into a living room — unobtrusive ways for people to improve their health.

This impulse has brought her back to school again — this time to an M.B.A. program at Columbia, where she’s trying to turn her ideas into products.

“I want to make sure people have easy tools that can be integrated into their households that are fun, that are artsy, and that will actually have impact.”

Singh believes brain health deserves the same level of awareness and action as heart health.

“The major papers are out,” she said. “We’re getting the signs out there.” The real impact, she knows, will come when people incorporate the research into their lives. “It doesn’t matter where you’re starting,” she said. “What matters is that you begin. Improving — even just a little — is the way forward.”

Better public service with data

Davi Augusto Oliveira Pinto’s career in Brazil’s foreign service took him all over the world. His work as a diplomat for more than two decades exposed him to the realities of life for all kinds of people, which informed his interest in economics and public policy. 

Oliveira Pinto is now focused on strengthening his diplomatic work through his MIT education. He completed the MITx MicroMasters program in Data, Economics, and Design of Policy (DEDP), which is jointly administered by MIT Open Learning and the Abdul Latif Jameel Poverty Action Lab (J-PAL), and then applied and was accepted to the DEDP master’s program within MIT’s Department of Economics.

“I think governments should be able to provide data-driven, research-supported services to their constituents,” he says. “Returning to my role as a diplomat, I hope to use the tools I acquired in the DEDP program to enhance my contributions as a public servant.”

Oliveira Pinto was one of Brazil’s representatives to the World Trade Organization (WTO), helped Brazilian citizens and companies abroad, and worked to improve relationships with governments in South Africa, Argentina, Italy, Spain, and Uruguay. He observed firsthand how economic disparities could influence laws and lives. He believes in a nonpartisan approach to public service, producing and sharing policy based on peer-reviewed data and research that can help as many people as possible. 

“We need public policy informed by evidence and science, rather than by politics and ideology,” he says. “My experience at MIT reinforced my conviction that diplomacy should be used to gather people from different backgrounds and develop joint solutions to our collective challenges.”

As someone responsible for dealing with international trade issues and who understands the potential negative, far-reaching impacts of poorly researched and instituted policies, Oliveira Pinto saw MIT and its world-class economics programs as potentially world-altering tools to help him advance his work. 

Advocacy and economics

Growing up in Minas Gerais, Brazil, Oliveira Pinto learned about the country’s past of economic cycles driven by exporting commodities like minerals and coffee. He also witnessed what he described as Brazil’s “eternal state of development,” one in which broad swaths of the population suffered, and very soon became aware of the impact that issues like inflation and unemployment had on the country. 

“I thought studying economics could help solve issues I observed when growing up,” he says.

Oliveira Pinto earned an undergraduate degree in economics from Universidade Federal de Minas Gerais and a master’s degree in public policy from Escola Nacional de Administração Pública.

Oliveira Pinto’s personal experiences and his commitment to understanding and improving the lives of his fellow Brazilians led him to enroll in the Instituto Rio Branco, Brazil’s diplomatic academy, where he was trained in a variety of disciplines. “I was drawn to investigate inequality between countries, which led to my diplomatic career,” he says. “I worked to help Brazilian migrants abroad, promoted Brazilian companies’ exports, represented Brazil at the WTO, and helped pandemic-era assistance efforts for people in Brazil’s poor border towns.”

During the pandemic, Oliveira Pinto found himself drawn to the DEDP MicroMasters program. He was able to review foundational economics concepts, improve his ability to synthesize and interpret data, and refine his analytical skills. “My favorite course, Data Analysis for Social Scientists, reinforced the critical importance of interpreting data correctly in a world where information is increasingly abundant,” he recalls. 

The online program also offered an opportunity for him to apply to study in person. Now at MIT, Oliveira Pinto is finishing his degree with a capstone project focused on how J-PAL works with governments to support the scaling of evidence-informed policies.  

J-PAL’s research center and network have built long-term partnerships with government agencies around the world to generate evidence from randomized evaluations and incorporate the findings into policy decisions. They work closely with policymakers to inform anti-poverty programs to improve their effectiveness, an area of particular interest to the Brazilian diplomat. 

“I’m trying to understand how J-PAL’s partnerships in these places are working, any lessons we can learn from successes, challenges faced, and how we can most effectively scale the successful programs,” he says.

Inside and beyond MIT

Oliveira Pinto was welcomed into a thriving, diverse community in Cambridge, a journey that was both edifying and challenging. “My family and I found a home,” he notes, observing that many Brazilians live in the area, “and it’s sobering to see so many people from my country working hard to build their lives in the U.S.”

Oliveira Pinto says working closely with members of the MIT community was one of the DEDP master’s program’s big draws. “The ability to forge connections with students and faculty while learning from Nobel laureates and accomplished researchers and practitioners is amazing,” he says. Collaborating with people from a variety of professional and experiential backgrounds, he notes, was especially satisfying. 

Oliveira Pinto offered special praise for MIT’s support for his family, describing it as “particularly rewarding.” “MIT offers so many different activities for families,” he says. “My wife and three daughters benefited from the support the Institute provides.” While the family took advantage of their time in the States to visit Canada and Washington, D.C., they also made the most of their time in Cambridge, enjoying sailing, swimming, yoga, sports, pottery, lectures, and more while Davi pursued his studies. “The facilities are awesome,” he continues.

Assessing and quantifying impact

Oliveira Pinto’s investigations have yielded some fascinating findings. “Data can be misused,” he notes. “I learned how easily data can tell all kinds of stories, so it’s important to be careful and rigorous when assessing different claims.” He recalls how, during an econometrics class, he learned about parties on opposite sides of a health insurance divide pursuing radically different ends using the same data, each side promoting different views. 

Oliveira Pinto believes his studies have improved his abilities as a diplomat, one of the reasons he’s excited about his eventual return to the public service. “I’ll return to government service armed with the skills the DEDP program and the research conducted during my capstone project have provided,” he says. “My job as a diplomat is to seek opportunities to connect with different people, investigate carefully, and find common ground,” work for which his DEDP MicroMasters and master’s studies have helped prepare him.

Completing his capstone, Oliveira Pinto hopes to draw lessons from J-PAL’s work with governments to improve constituents' quality of life. He’s helping generate case studies that may foster future collaborations between researchers and the public sector. 

“Work like this can be a good opportunity for governments interested in a research-supported, data-driven approach to policymaking,” he says. 

© Photo: Hanley Valentin

“I’ll return to government service armed with the skills the DEDP program and the research conducted during my capstone project have provided,” master's student Davi Augusto Oliveira Pinto says.

Celebrate Cambridge’s iconic landmarks and uncover new treasures this September at Open Cambridge

A group of people walk up to a radio telescope

With over 70 drop-in and bookable events taking place over 10 days, Open Cambridge encourages people to discover more about their local history and communities. Here is a preview of some of the events on offer. 

Experience 2 iconic Cambridge sites this September by booking onto guided tours of the Mullard Radio Astronomy Observatory (MRAO) and the University of Cambridge’s Senate House. At MRAO, discover more about the mysterious dishes dotted across the Cambridgeshire countryside. You’ll get up close to the One-Mile Telescope, 5-km Ryle Telescope, and the Arcminute Microkelvin Imager, as well as see inside some of the control rooms and learn about the unique history of the site and some of the important discoveries made here. In the tours of Senate House, led by the University’s Ceremonial Officer, find out what goes on in this Grade I listed building during graduations as well as some of the incredible history to which the building has played host.  

Learn about the experiences of over 2,000 Cambridgeshire soldiers who were sent at the last minute by Churchill to the failed defence of Singapore in WWII in a special talk by Lewis Herbert, former Leader of Cambridge City Council. Marking the 80th anniversary of the release of the Far East Prisoners of War (FEPOWs) from Japanese Army slavery in September 1945, the talk will pay tribute to them and their families, particularly the more than 800 local men who never made it home – over 4 in every 10.

This year marks 250 years since the birth of Jane Austen, and to celebrate, King’s College Library and Archives are hosting an exhibition showcasing first and early editions of the author’s much-loved novels, alongside the autograph manuscript of her unfinished novel Sanditon and treasures highlighting the Austen family’s connection with the College. This one-day event is a rare opportunity to look inside the College’s beautiful early nineteenth-century library, designed by the architect William Wilkins.

Back in May, the Sainsbury Laboratory here in Cambridge was part of a team that won a silver-gilt medal at the RHS Chelsea Flower Show. For Open Cambridge, enjoy a behind-the-scenes tour of the lab, see some of the award-winning display, and have a go at some of the interactive activities the team took to Chelsea.

Try your hand at the world’s fastest-growing sport, padel, in a free 55-minute taster session at the Cambridge University Sports Centre. A fun, sociable mix of tennis and squash, padel is great for beginners: each session is led by a qualified coach, so you’ll learn the rules, try out some shots, and experience what makes the sport so popular.

Cambridge Samaritans will be joining Open Cambridge for the first time this year. For over 60 years, they have been there – day or night – for anyone struggling to cope or in distress, offering a safe space to talk without judgement or pressure. Join a special online event to find out more about the work the charity is doing on the helplines and in the local community, and discover Samaritans’ unique approach to supporting those in emotional distress and their work in reducing the number of suicides.

Also in the programme for the first time are 2 tours of the Biomedical Campus. The first, delivered by sociologists and residents David Skinner and Will Brown, considers the past, present, and future of the Campus from the perspective of the people who live around it.

The second tour will explore landmark institutions like Addenbrooke’s and Royal Papworth Hospitals, the Laboratory of Molecular Biology, and AstraZeneca’s global HQ as well as give visitors the opportunity to learn about the upcoming Cancer and Children’s Hospitals, world-first surgeries, and the collaborative spirit that drives breakthroughs from bench to bedside. 

Zoe Smith, Open Cambridge Manager, said: “Each year we’re blessed with such an incredible and unique programme of events. From garden and walking tours to learning more about some of the amazing work our local community organisations undertake, this year’s programme is opening doors to the residents of Cambridge.”

Jo McPhee, Civic Engagement Manager at the University of Cambridge, said: “Open Cambridge is a great way for our University and local communities to come together and celebrate our shared history and the incredible stories behind our spaces, places and people. This year’s programme is full of exciting events that bring those stories to life, showcasing the depth and diversity of our collective heritage.”

You can view the full Open Cambridge programme on our website.

Open Cambridge is part of the national Heritage Open Days. It is designed to offer special access to places that are normally closed to the public or charge admission. The initiative provides an annual opportunity for people to discover the local history and heritage of their community. Open Cambridge is run by the Public Engagement team at the University of Cambridge, who also deliver the Cambridge Festival, which takes place each spring. 

Bookings are now open for Open Cambridge 2025 (12-21 September). This September the public can enjoy tours of College gardens, exhibitions from hidden archives, and tours of University sites not usually open to the public, as well as open sites across the city – all free of charge.




NUS students raise close to S$260,000 for charities through NUSSU RAG & Flag 2025

Students from the National University of Singapore (NUS) have raised close to S$260,000 through this year’s Receiving and Giving (RAG) and Flag 2025, organised by the NUS Students’ Union (NUSSU). The funds, contributed by generous donors across the NUS community, corporate partners, and members of the public, will support critical programmes under 16 Social Service Agencies (SSAs) supported by Community Chest, with all proceeds going directly towards empowering communities in need.

Now in its 67th edition, RAG & Flag continues a proud NUS tradition established in 1958 to inspire student-led giving and service to the wider community. This year’s efforts take on special significance as NUS celebrates its 120th anniversary, alongside Singapore’s 60th birthday—a dual milestone that underscores a shared commitment to unity and nation-building.

Over the past 16 years, NUS students have raised a cumulative total of nearly S$4.9 million through this annual initiative.

Unity, creativity, and compassion in action

This year’s RAG & Flag theme, “Connecting Beyond Horizons”, called on students to transcend physical, social and personal boundaries to bring about meaningful change.

More than 1,500 students participated in two Flag Days held on 26 July and 2 August 2025, fanning out across Singapore with donation tins to raise awareness and funds for their selected causes. As a gesture of appreciation to the community for their generous support, the initiative culminated in today’s RAG Day showcase at University Town Green, where 2,000 students took centre stage in 14 spirited performances, complete with handmade costumes and mobile floats built using recycled materials as part of a zero-waste challenge.

The event was graced by Mr Ong Ye Kung, Minister for Health and Coordinating Minister for Social Policies, as Guest-of-Honour.

Speaking at NUSSU RAG and Flag Day 2025, NUS President Professor Tan Eng Chye said, “This year’s RAG and Flag keeps alive the time-honoured tradition of student-led fundraising and the exciting showcase of student performances as a vibrant finale to freshmen orientation season. As we continue the fine legacy of giving back to the community and encourage our students to make positive footprints on society, the vivid stories brought to life through the performances remind us that we are vitally connected, and together, we can inspire hope, uplift communities and drive meaningful change. We look forward to starting the new Academic Year with meaningful anticipation and purpose as NUS celebrates our 120th anniversary and Singapore marks 60 years of nation-building. As a university founded in 1905 by the community, for the community, we will continue to serve our country and society with dedication and passion.”

Adding to the festivities this afternoon was a lively Flag Carnival featuring food, games, and handmade crafts by student groups and hostels, alongside a performance by the NUS Cheerleading Team, Alpha Verve.

Mr Sean Pang, President of the 46th Executive Committee (EXCO) of NUSSU, said, “Building on this year’s theme, ‘Connecting Beyond Horizons’, we pushed boundaries and explored new ways to improve RAG and Flag. Leveraging digital payments, we customised 22 PayNow QR codes and collaborated with both NUS Sustainability Strategy Unit and NUS SAVE (Students’ Association for Visions of the Earth) to bolster our recycling efforts. All these would not have been possible without the hard work of our fellow students in each participating body, the RAG and Flag Committee, internal and external stakeholders, and the public who have supported us every step of the way.”

The proceeds from NUSSU RAG and Flag 2025 will be channelled through the Community Chest to benefit critical programmes under the following SSAs:

  1. SHINE Children and Youth Services
  2. Care Community Services Society
  3. MINDS
  4. Autism Resource Centre
  5. Children-at-Risk Empowerment Association (CARE) Singapore
  6. Singapore Children’s Society
  7. Asian Women’s Welfare Association
  8. Rainbow Centre
  9. Fei Yue Family Service Centre
  10. Samaritans of Singapore
  11. Club HEAL
  12. Fei Yue Community Services
  13. O’Joy Care Services
  14. Children’s Cancer Foundation
  15. Singapore Association of the Visually Handicapped (SAVH)
  16. Life Community Services Society (LCSS)

NUS students celebrate tradition to uplift communities through creativity and service at NUSSU RAG & Flag Day 2025

From cheering crowds to intricately crafted floats, NUSSU RAG & Flag Day 2025 lit up University Town Green on 8 August 2025, celebrating the enduring spirit of creativity, connection, and community at NUS.

The lively showcase marked the culmination of the 67th edition of NUSSU RAG & Flag Day—one of NUS’ longest-standing student-led traditions, which dates back to 1958. Themed “Connecting Beyond Horizons”, this year’s event paid tribute to two milestone anniversaries—NUS’ 120th anniversary and Singapore’s 60th birthday—while rallying students to build bridges, uplift lives, and give back meaningfully to society to bring hope and transformation to the future.

A legacy of giving

The RAG & Flag tradition started with two Flag Days on 26 July and 2 August, where more than 1,500 NUS students fanned out across Singapore with donation tins to raise funds in aid of critical programmes run by 16 Social Service Agencies (SSAs) that are supported by Community Chest. They raised close to S$260,000 this year, with all funds going directly towards uplifting communities in need through the Community Chest. The amount raised would also attract S$1.50 in government matching for every dollar donated, yielding even more funds for the beneficiaries.

Over the past 16 years, the initiative has collectively raised nearly S$4.9 million, reflecting the enduring impact of student-led service at NUS.

For Sivakumar Nandhana, a first-year student at the NUS Department of Pharmacy and Pharmaceutical Sciences, Flag Day offered a moment of reflection and purpose. “We often focus on our careers and personal lives, rarely stopping to consider what others in our community need. Flag Day is my little way of helping those in my community. It is also an opportunity to meet like-minded people who also care for the society,” she said.

Li Jie, from the NUS Faculty of Dentistry, added, “It’s really heartwarming to see people donate so willingly and I enjoy being a part of such a meaningful activity.”

A vibrant showcase of gratitude and sustainability

As part of RAG Day, nearly 2,000 students from across the University staged 14 dazzling performances to express their gratitude to the community for its generous support. Each performance was choreographed and produced entirely by students, featuring handmade costumes, mobile floats, and thematic props constructed primarily from recycled materials—in line with the zero-waste challenge promoting sustainable practices.

Gracing the event as Guest-of-Honour was Mr Ong Ye Kung, Minister for Health and Coordinating Minister for Social Policies.

Speaking at the official launch of NUSSU RAG and Flag Day 2025, NUS President Professor Tan Eng Chye commended the passion, creativity and dedication of NUS students in upholding this proud university tradition.

“This year’s RAG and Flag keeps alive the time-honoured tradition of student-led fundraising and the exciting showcase of student performances as a vibrant finale to freshmen orientation season. As we continue the fine legacy of giving back to the community and encourage our students to make positive footprints on society, the vivid stories brought to life through the performances remind us that we are vitally connected, and together, we can inspire hope, uplift communities and drive meaningful change. We look forward to starting the new Academic Year with meaningful anticipation and purpose as NUS celebrates our 120th anniversary and Singapore marks 60 years of nation-building. As a university founded in 1905 by the community, for the community, we will continue to serve our country and society with dedication and passion,” he said.

Behind the scenes, each performance was the result of months of planning, collaboration and tireless effort. Student teams poured their creativity into every detail—from float engineering and set design to costume-making and lighting—all while embracing sustainable practices.

Jasper Ang, a Year 2 Computer Science student and Creative Director for this year’s Raffles Hall RAG, said, “From floats to costumes, everything was upcycled. Materials that couldn’t be used for design were repurposed for functionality, such as linings and backings. In Hall, we often have events involving cooking, so we incorporated egg cartons that we had collected, to add texture to our float. We also reused fabric from last year, and some hallmates even contributed their old clothes, giving the costumes a unique and personal flair.”

For their creativity in transforming discarded material into a float masterpiece, Raffles Hall received a Zero Waste Effort Award, along with the Faculty of Science.

The floats and props, crafted largely from recycled and repurposed materials, served as powerful visual anchors for the performances. More than stage décor, they told meaningful stories of resilience, hope and unity. The performers in handcrafted costumes brought each segment to life in a vibrant fusion of artistry, sustainability and purpose.

Speaking of his experience participating in RAG and Flag as a float builder, first-year NUS Law student Matthew Lim said, “You never know where you would go, if you don’t go there. I loved being part of RAG – I found new perspective and friends!”

A celebration of community

Alongside the main showcase, a lively Flag Carnival brought the campus to life with games, food and handicraft booths run by student groups and hostels. An energetic performance by the NUS Cheerleading Team, Alpha Verve, added to the festive spirit.

Mr Sean Pang, President of the 46th Executive Committee of NUSSU, shared, “Building on this year’s theme, ‘Connecting Beyond Horizons’, we pushed boundaries and explored new ways to improve RAG and Flag. Leveraging digital payments, we customised 22 PayNow QR codes and collaborated with both NUS Sustainability Strategy Unit and NUS SAVE (Students’ Association for Visions of the Earth) to bolster our recycling efforts. All these would not have been possible without the hard work of our fellow students in each participating body, the RAG and Flag Committee, internal and external stakeholders, and the public who have supported us every step of the way.”

Highlights from RAG & Flag 2025

Overall RAG & Flag winner: 

  • Yong Loo Lin School of Medicine

Zero Waste Best Effort Award: 

  • Pharmaceutical Society

Zero Waste Effort Award:

  • Faculty of Science
  • Raffles Hall

RAG Gold:

  • Faculty of Dentistry
  • Faculty of Science
  • Raffles Hall
  • School of Business
  • Yong Loo Lin School of Medicine

RAG Silver:

  • College of Design and Engineering
  • Kent Ridge Hall and Sheares Hall (KRaSheares)
  • Faculty of Arts and Social Sciences
  • Faculty of Law
  • Pharmaceutical Society

RAG Bronze:

  • Eusoff Hall
  • King Edward VII and Pioneer House (KExPH)
  • School of Computing
  • Temasek Hall

Flag Gold:

  • Faculty of Law
  • Kent Ridge Hall
  • Residential College 4
  • School of Business
  • School of Computing
  • Yong Loo Lin School of Medicine

Flag Silver:

  • Faculty of Arts and Social Sciences 
  • Faculty of Dentistry
  • Pharmaceutical Society
  • Tembusu College

Flag Bronze:

  • College of Alice and Peter Tan 
  • College of Design and Engineering 
  • Eusoff Hall
  • Faculty of Science
  • Helix House
  • King Edward VII Hall
  • Pioneer House
  • Raffles Hall
  • Ridge View Residential College
  • Sheares Hall
  • Temasek Hall

See full press release here.

Funding cuts upend projects piecing together saga of human history


Christina Warinner.

Stephanie Mitchell/Harvard Staff Photographer

Campus & Community

Funding cuts upend projects piecing together saga of human history

Ancient DNA expert Christina Warinner notes losses come just as innovations are driving major advances in field

Christy DeSmith

Harvard Staff Writer

6 min read

In February, Christina Warinner, M.A. ’08, Ph.D. ’10, was accepting an award from the American Association for the Advancement of Science when she learned that one of her projects was on a list circulating in Washington of targeted federal research grants. A couple of months later, she appeared in Stockholm at a Nobel symposium and lost two National Science Foundation grants over the span of two weeks.

Warinner, Landon T. Clay Professor of Scientific Archaeology, is well-known in the field of ancient DNA, with her pioneering methods cracking several mysteries concerning early human diets and health. Hers were among the more than 1,600 NSF grants for active projects that were terminated in the spring.

“I recognize it can be hard to compare this work with medical research, which has such obvious applications for saving lives,” Warinner said. “But people also have a deep curiosity about who we are and where we come from. Our work is important because it uses our most powerful technologies to reveal how we, as humans, lived thousands of years ago so that we may better understand our world today.”

The cuts come at a critical time for practitioners of ancient DNA science, a discipline in rapid ascent due to recent advances in lab techniques and computing power. The multidisciplinary field got its start in the mid-1980s in the United States, but support here for the work has lagged behind Northern Europe during the 21st century.

“It’s just really sad,” Warinner said. “American archaeologists have been leaders in telling the stories of humankind. But if our funding is removed, we won’t be leaders anymore.”

“American archaeologists have been leaders in telling the stories of humankind. But if our funding is removed, we won’t be leaders anymore.”

Christina Warinner

The annual meeting of the American Association for the Advancement of Science (AAAS) was supposed to be a joyful occasion for Warinner, who was presented with its 2025 Robert W. Sussman Award for Scientific Contributions to Anthropology.

At the Boston reception, a fellow researcher told Warinner one of her major projects was on a database of recommended research cuts. She and her team have been in the thick of a three-year inquiry into the diplomatic role of marriage and extended kin networks in connecting ancient Maya kingdoms along a major river valley in Belize.

It’s one of the most intensely studied corners of the ancient Maya world, yielding more than a century’s worth of archaeological discovery.

Cracking the civilization’s elaborate hieroglyphic script, with key breakthroughs made at Harvard in the 1950s, clarified the importance of intermarriage to maintaining inter-kingdom relations. Recent innovations in remote sensing helped researchers uncover a string of previously unknown settlements in a densely forested area known over thousands of years for its cacao harvests.

Was the Belize River Valley more tightly knit with cross-community relations than previously thought? Warinner and her collaborators were on the cusp of finding out.

“The genetic data would really help us tie it all together, to really understand how the ancient Maya political system worked,” she said.

Only in the last five or six years has such a revelation become possible, thanks to advancements in sequencing ancient genomes from hot, humid climates, where DNA deteriorates far more quickly.

Researchers in Belize and at Harvard extracted genetic data from 400 individuals who inhabited the valley over hundreds of years, between 300 B.C.E. and 1000 C.E. To Warinner’s surprise, nearly all, sourced from newly identified sites as well as decades-old excavations, generated at least partial genomes.

“We never anticipated such a high success rate,” shared Warinner, a native Midwesterner who has been studying ancient Maya since her undergraduate years at the University of Kansas. “It’s wonderful. But it also makes our project more expensive than we originally budgeted.”

A May 15 letter canceling the project’s NSF funding dealt an unexpected second blow. Also lost was support for newer research on the practice of horse milking, with recent findings suggesting its origins may be close in age to horse domestication itself.

“Modern society was literally built on the backs of horses,” Warinner said. “But many people are surprised to learn that early domesticated horses were milked. We still don’t know where or when this practice began — that’s something we wanted to trace, to better understand these very earliest human-horse relationships.”

As an ancient DNA expert and a group leader at Germany’s Max Planck Institute, she had been invited by the Nobel Committee to present her work on ancient microbes at a Nobel Symposium on Paleogenomics. Warinner presented May 28 on the archaeology of infectious diseases, the history of fermented foods, and the evolution of the human microbiome.

The topic of horse milking fits squarely with this research focus. Of longstanding interest to Warinner is how milk and dairy products became dietary staples in a world where most people are lactose-intolerant. “They are some of our oldest — and least-understood — manufactured foods,” she marveled.

Koumiss, a fermented beverage still popular in Central Asia, makes for a particularly fascinating case study. Made from horse milk, it hails from the very region where horse domestication is believed to have started more than 4,000 years ago. In fact, the mildly alcoholic drink is known to have fueled some of the great Eurasian nomadic empires, including the Mongols and the Xiongnu.

“The whole reason we have undertaken this project is because we believe it is important for understanding human history.”

Christina Warinner

Warinner and her collaborators proposed a novel approach to identifying when, and where, these grassland dwellers got their first sips of koumiss. As a postdoctoral researcher at the University of Oklahoma, she was among the first to recognize that dental tartar could be a goldmine for archaeological scientists. The calcified buildup, she found, entraps and preserves biomolecules like DNA as well as proteins, providing unique insights into ancient diets.

Learning about the emergence of koumiss — or raw horse milk, for that matter — meant collaborating with researchers across Central Asia to perform dental cleanings on their archaeological collections.

“The whole reason we have undertaken this project is because we believe it is important for understanding human history,” Warinner offered. “Our grant proposal was successful because a panel of peer reviewers agreed, deeming our research vital science of high priority.

“It’s such an honor,” she added, “to receive funding this way.”

MIT School of Engineering faculty receive awards in spring 2025

Each year, faculty and researchers across the MIT School of Engineering are recognized with prestigious awards for their contributions to research, technology, society, and education. To celebrate these achievements, the school periodically highlights select honors received by members of its departments, labs, and centers. The following individuals were recognized in spring 2025:

Markus Buehler, the Jerry McAfee (1940) Professor in Engineering in the Department of Civil and Environmental Engineering, received the Washington Award. The award honors engineers whose professional attainments have preeminently advanced the welfare of humankind.

Sili Deng, an associate professor in the Department of Mechanical Engineering, received the 2025 Hiroshi Tsuji Early Career Researcher Award. The award recognizes excellence in fundamental or applied combustion science research. Deng was honored for her work on energy conversion and storage, including combustion fundamentals, data-driven modeling of reacting flows, carbon-neutral energetic materials, and flame synthesis of materials for catalysis and energy storage.

Jonathan How, the Richard Cockburn Maclaurin Professor in Aeronautics and Astronautics, received the IEEE Transactions on Robotics King-Sun Fu Memorial Best Paper Award. The award recognizes the best paper published annually in the IEEE Transactions on Robotics for technical merit, originality, potential impact, clarity, and practical significance.

Richard Linares, the Rockwell International Career Development Professor in the Department of Aeronautics and Astronautics, received the 2024 American Astronautical Society Emerging Astrodynamicist Award. The award honors junior researchers making significant contributions to the field of astrodynamics.

Youssef Marzouk, the Breene M. Kerr (1951) Professor in the Department of Aeronautics and Astronautics, was named a fellow of the Society for Industrial and Applied Mathematics. He was honored for influential contributions to multiple aspects of uncertainty quantification, particularly Bayesian computation and measure transport.

Dava Newman, the director of the MIT Media Lab and the Apollo Program Professor in the Department of Aeronautics and Astronautics, received the Carolyn “Bo” Aldigé Visionary Award. The award was presented in recognition of the MIT Media Lab's women’s health program, WHx, for groundbreaking research in advancing women’s health.

Martin Rinard, a professor in the Department of Electrical Engineering and Computer Science, received the 2025 SIGSOFT Outstanding Research Award. The award recognizes his fundamental contributions in pioneering the new fields of program repair and approximate computing.

Franz-Josef Ulm, the Class of 1922 Professor in the Department of Civil and Environmental Engineering, was named an ASCE Distinguished Member. He was recognized for contributions to the nano- and micromechanics of heterogeneous materials, including cement, concrete, rock, and bone, with applications in sustainable infrastructure, underground energy harvesting, and human health.

© Photo: Lillie Paquette

Eight members of the MIT engineering faculty received awards in spring 2025.

Crossing borders, shaping minds: Asian Undergraduate Symposium marks a decade of nurturing youth leaders

It all started in 2015 as an ambitious student-led international programme that rallied about 100 ASEAN undergraduates to tackle real-world challenges faced by communities in the region. Ten years later, the Asian Undergraduate Symposium (AUS) has grown into an impactful platform that has connected more than 2,000 youth leaders and changemakers across Asia.

Over the past decade, it has brought together students from 90 universities across 17 Asian nations and become a fixture of the ASEAN University Network’s AUN Summer Camp Programme, contributing to efforts by universities across the region to equip ASEAN students with the competencies and knowledge to be leading global talents.

The programme’s success has exceeded expectations for founding chairperson Goh Seng Chiy, who graduated from the former University Scholars Programme with a degree in Chemical Engineering in 2017. “People would only continue something like this if they see value in it, and I’m glad to see that people feel what we started has value,” Mr Goh reflected.

He hopes that a key takeaway for participants will remain the cross-cultural friendships and perspectives they gain over the two weeks of mingling with regional counterparts. “You may not remember the specific topics, but if the way you think about the region has evolved so you understand the challenges, constraints and decisions of our ASEAN neighbours better, that’s a big mindset shift that will stick,” he said. “Like a university education, the impact may not be easily quantifiable, but the benefit lies in how it shapes your worldview and your thinking.”

The 2025 AUS, which took place from 7–19 July 2025, was jointly organised by NUS and NUS College (NUSC), the honours college of NUS. The 10th anniversary edition of AUS brought together more than 300 students from 55 universities across Asia to collectively envision and shape a better future. It had a main theme of Interconnected Communities and featured three subthemes of Sustainability and Regeneration; Diversity, Equity and Inclusion; and Heritage and Culture.

Designs that prioritise life

AUS 2025 broadened its sustainability agenda by including the concept of regeneration, a dimension of sustainability that aims to go beyond halting environmental degradation to restoring and improving ecosystems, and encouraged participants to incorporate it into their project proposals.

“Regeneration isn’t new. Across Southeast Asia, our ancestors have done this before,” said Ms Bernise Ang, Principal and Chief Alchemist at Zeroth Labs and an adjunct lecturer at NUSC, who cited the subak system of water management in Bali as an example of a traditional solution that exemplifies regeneration. Under this system, rice farmers meet regularly at water temples to coordinate their planting schedules in a way that ensures sufficient water for the crops while also controlling pest populations.

This practice was disrupted in the 1970s by the “Green Revolution”, a global push to boost food production with high-yield seeds, fertilisers and pesticides. Although such efforts temporarily yielded more crops, the negative repercussions emerged within just a few seasons: farmers found themselves grappling with pollution, soil degradation, water shortages and exploding pest populations. The Indonesian government eventually recognised that the new methods were unsustainable and ordered a return to the subak method that allowed people and nature to exist in harmony.

Said Ms Ang: “Every system we’ve inherited was once imagined, which means we can imagine things differently. Regeneration means remembering that we belong to living systems and can design as if life matters.”

The inaugural Regen Asia Summit (RAS), an intensive two-day programme held immediately before AUS, gave some 200 AUS participants a head start on understanding regeneration and the approaches needed to design effective regenerative solutions. Their knowledge was further augmented in an AUS workshop titled “From RAS to AUS: Towards Regenerative Action” that ensured all attendees at the symposium had a shared understanding of the concept and how to apply it.

The power of collective action

Through presentations and dialogues with expert practitioners and learning journeys to local communities, participants learnt that implementing regenerative solutions involves efforts from all quarters, including individual consumers, community groups, governments and international bodies.

Dr Farrah Shameen Ashray, CEO of the Malaysian Timber Certification Council, gave participants an overview of green product certification, which businesses earn by meeting certain requirements for economic viability, environmental protection and social responsibility. She urged participants to seek out certified products and explore how they can contribute to the work of product certification through their chosen fields of study.

“Certification is about compliance with laws and regulations, and the compliance sector needs auditors, social impact assessors, sociologists, safety and health officers, lawyers and more. You can make an impact that will be transferred through the certification process to the products and businesses behind them,” said Dr Farrah.

Dr Supatchaya “Ann” Techachoochert, senior environmental manager at Mae Fah Luang Foundation in Thailand, explained how one of the foundation’s projects revitalised the heavily deforested Doi Tung area in Chiang Rai by offering villagers sustainable jobs in horticulture, handicrafts and reforestation. This incentivised them to stop clearing trees to plant low-value crops and instead rebuild their environment and economy. Over 30 years, their efforts tripled the region’s economic value and restored enough forest to generate 220 tonnes of carbon credits, which the foundation uses to fund more restoration efforts.

Associate Professor Adrian Loo, who is the deputy director of the NUS Centre for Nature-based Climate Solutions, shared stories of flora and fauna in Singapore that benefitted from small but impactful interventions by conservation groups, such as rope bridges that reduced mortality rates among Raffles’ banded langur monkeys by enabling them to safely cross roads between forested areas. He said, “It’s a small gesture, but that’s all nature needs – a small gesture in the right direction.”

While the impact is not always immediately visible, regenerative efforts for one species often create positive ripples for others. Marine biologist Sam Shu Qin, who also leads marine conservation-related projects and overseas trips as an educator at NUSC, dives daily in Singapore waters for coral restoration work like relocating corals from areas slated for development and monitoring their health and growth. She shared a video of a pair of cuttlefish laying eggs in an artificial reef structure under her team’s care, demonstrating the broader impact of coral restoration efforts on the marine ecosystem.

Translating lessons to solutions

Working in interdisciplinary and international teams, the students used their newfound knowledge to develop 40 project proposals for solving challenges in their communities. They presented their solutions in a session called Project World Café on 16 July, gathering feedback from their peers to further refine the proposals.

Many groups that tackled environmental issues incorporated regeneration into their solutions, such as a project on wild bee conservation in Cambodia that included plans for bee home kits to support the insects’ population growth. Another project proposed using bokashi balls made from soil and beneficial microorganisms to revive biologically dead rivers in the Philippines.

For Tiffany Cloa, a participant from the University of the Philippines, the collaborative experience was valuable in exposing her to a range of perspectives and providing opportunities for personal growth and reflection. “We have very different opinions, which is not a bad thing. It made me realise that we are all in our own vacuums, and I had to learn to navigate the paradox between discussing First World ideas of AI and technology and comparing that to what people in my country are going through, where they can’t even eat three meals a day.”

Following the symposium, the projects will be evaluated by an expert panel, which will select up to 10 of the most promising projects to receive a seed grant of S$5,000 to turn their concepts into reality in their communities of choice. Starting this year, these selected projects also stand to benefit from a new Impact Incubation Programme by NUSC, which will offer additional financial support, training and mentorship to help the teams go further in their efforts to bring positive change to communities.

Read the press release here.

Cultivating the next generation of global entrepreneurs through the NUS Enterprise Summer Programme in Entrepreneurship

Through the perspectives of NUS students Neryss Ho and Cheow Jun Wei, we delve into how the transformative two-week NUS Enterprise Summer Programme in Entrepreneurship sparked their entrepreneurial passions, honed their pitching skills, took them on immersive learning journeys, and helped them forge meaningful connections with both local and international peers.

Fostering an entrepreneurial mindset

All participants were challenged to tackle real-world problems faced by companies, from sustainable supply chains to tech-driven social impact, through intensive workshops and team pitches.

As shaping the entrepreneurial mindset was crucial at the start, the opening ceremony included a fireside chat with Mr Jeffrey Tiong, founder of unicorn start-up PatSnap. Recalling his journey from humble beginnings to scaling a global innovation platform, he highlighted invaluable lessons he had acquired on resilience, reinvention, and radically honest leadership.

The fireside chat was one of the sessions where students could pick up business insights and put them into practice over the course of the programme. On Demo Day in week two, they presented their ideas, applying what they had learned, to a panel of venture capitalists and industry experts.

Jun Wei, a Data Science and Analytics major from NUS College of Humanities and Sciences, drew inspiration from Mr Tiong as he recounted how, in his role as leader of the pitch team HEALTHLINK, he learned to reframe challenges as opportunities. Their team developed an innovative chatbot to enhance patient experiences on Singapore’s HealthHub platform. This effort not only earned them a first-prize win in their track but also left a strong impression on the judging panel.

“Our multicultural team consisted of Pim from Thailand, Toby and Ying Yue from China, Moza from the UAE, and me. While we all came from vastly different backgrounds, we each had complementary strengths which contributed to the team’s success,” he recounted fondly. “We bonded by sharing past project experiences, and I took the lead to structure our discussions.” 

Thriving in ASEAN’s start-up ecosystem

Week two culminated with ASEAN Day, a dedicated exploration of Southeast Asia’s dynamic entrepreneurial landscape. The centrepiece was a panel session on “Opportunities & Challenges in Southeast Asian Start-up Ecosystem”, where participants acquainted themselves with opportunities for growth and innovation in the fast-growing region.

The programme’s industry exposure took participants directly into Singapore’s thriving start-up ecosystem. Visits to BLOCK71 Singapore and interactions with founders from NUS start-ups like Carousell and ShopBack revealed the practical steps to scaling ventures. In addition, a fireside chat with Mr Philip Yeo, Chairman of Economic Development Innovations Singapore, offered insights on talent development, navigating ASEAN’s US$600 billion (S$773 billion) digital economy, and how he strategised Singapore’s transformation into a global hub for technology, biomedicine, and innovation.

“If you want to do business here, learn the languages,” Mr Yeo emphasised, an essential starting point for any aspiring ASEAN entrepreneur.

Beyond business insights, the programme embraced ASEAN’s rich cultural tapestry through immersive activities that deepened participants’ regional understanding. Language learning segments introduced its linguistic diversity with lessons in Thai, Vietnamese, and Bahasa Indonesia. Hands-on workshops in traditional batik painting, angklung music, silat, and a traditional bamboo dance offered the participants authentic cultural experiences, while cookery sessions featuring traditional Southeast Asian dishes such as som tam (Thai papaya salad), gado-gado (Indonesian salad with peanut dressing), and Vietnamese spring rolls gave them a taste of the region’s flavours.

These cultural immersion activities proved more than recreational – they enhanced participants’ appreciation for ASEAN’s diverse heritage, along with their understanding of the business climate across Singapore and wider Southeast Asia.

As a Student Ambassador and Computer Science major from NUS School of Computing, Neryss approached the programme with a unique lens – not just as a participant, but as someone helping to facilitate cross-cultural connections. She found new appreciation for Singapore’s multicultural identity while connecting with clanmates from the region as well.

She highlighted her visit to Carousell Singapore as one which reiterated the importance of local market strategies. “Carousell’s founders showed how understanding local nuances drives success,” she said, reflecting on how blending cultural understanding with entrepreneurial vision is key – in Singapore, in ASEAN, in the world.

“And the rest is how you pitch your product!”

Mastering the art of persuasive pitching and storytelling

Beyond technical skills and industry insights, the programme culminated in developing participants’ most crucial entrepreneurial skill: the ability to persuasively communicate their vision. Led by the dynamic Mr Kris Childress, Mentor-in-Residence of NUS Enterprise, the pitching workshop helped students learn how to articulate their value proposition with clarity and passion.

Jun Wei and his HEALTHLINK pitch team worked together smoothly, with zero friction. “What I felt we did right was that our complementary strengths drove collaboration,” he remarked.

A pivotal mentoring session with Ms Joyce Ng from iGlobe Partners prompted a fundamental shift to a healthcare provider-focused solution. “Joyce’s advice to always listen to teammates and end users reshaped my thinking and allowed me to think deeper about the solution,” Jun Wei reflected. 

On the morning of Demo Day, nerves ran high as the HEALTHLINK team was worried about the five-minute time limit, but their focus on presenting paid off. “When the results were out later in the day that our team had won, I jumped for joy in the midst of the finale ceremony, fuelled by pure adrenaline. The win, the personalised congratulatory message from Joyce, and the atmosphere of energetic celebrations made the hard work worth it,” he recalled.

Building ventures with purpose

At the heart of the programme was purpose-driven entrepreneurship, inspired by a vision of ventures that drive societal good. A panel with leaders from Dignity Kitchen and raiSE Singapore set souls ablaze, showing how profit and impact can intertwine. Hands-on prototyping sessions brought ideas to life, such as apps for inclusive education and urban sustainability solutions.

Beyond learning to build resilient and scalable ventures, the programme was also about crafting legacies that make a difference in society, using purpose to drive oneself. “Leadership is remembering why and for whom you started,” said Mr Koh Seng Choon, Founder of Dignity Kitchen.

Finding her entrepreneurial purpose through her StepUp team’s pitch, Neryss shared how StepUp, a gamified workshop to teach financial literacy to low-income secondary school students in Singapore, planned to expand across Southeast Asia, where income inequality remains prevalent. “Our team’s synergy was our unique selling point,” she said, echoing judge Mr Prantik Mazumdar’s observation that a company’s direction may shift, testing the team’s ability to stay motivated.

“That advice led us to focus on financial literacy for secondary school students, simulating their daily choices between wants and needs, unlike broader gamified learning tools,” she said.

Celebrating a global entrepreneurial community

The programme culminated with a grand finale that perfectly captured the essence of the two-week journey. The event featured cultural performances by the students and Demo Day, where teams presented their polished start-up ideas to panels of industry judges and venture capitalists.

From gamified travel apps to sustainable tech solutions, the pitches showcased the programme’s impact on fostering creativity and ambition, as well as the aspiring founders behind the ideas that were brought to life. The event celebrated not only entrepreneurial achievements but also the global connections formed, with participants leaving as part of NUS Enterprise’s vibrant community.

“Regardless of where you’re from, I believe each of you is walking away with not only new entrepreneurial mindsets and skills, but also friendships and memories that I hope will last a lifetime,” Associate Professor Benjamin Tee told the students.

“Many of the founders said that this is an experience where one will experience many challenges and failures, and I want to test my resilience and meet all these ambitious, inspiring people along the way, as at the end of the programme, I realised that entrepreneurship is something that I really want to pursue,” shared Neryss, reflecting on the days spent with new friends and the fast-paced life lessons she gained along the way.

Echoing her sentiments, Jun Wei added that the two weeks had taught him more than his previous work experiences. “It ignited a drive to own my ideas and learn from mistakes in the chaos of creation.”

A global network for future innovators

The NUS Enterprise Summer Programme in Entrepreneurship 2025 served as a transformative launchpad for aspiring entrepreneurs. As the students returned home, they took with them not just memories but essential tools, international networks, lasting friendships, and newfound confidence to build ventures that will shape the future.

 

By NUS Enterprise

NUS and NTU renew commitment to share high-value research facilities to advance scientific research in Singapore

To push the frontiers of scientific research in Singapore, the National University of Singapore (NUS) and Nanyang Technological University, Singapore (NTU Singapore) are renewing their commitment to share high-value research equipment and facilities.

Providing scientists from both universities with access to cutting-edge, multi-million dollar research infrastructure at NUS and NTU Singapore will foster deeper collaboration, enabling more joint research projects, co-authored publications, and stronger funding opportunities.

Such partnerships strengthen institutional ties and promote a culture of knowledge sharing. They broaden training and development by equipping students and researchers with specialised tools and cross-institutional expertise. This leads to higher-quality research, faster innovation, and more effective industry engagement.

NUS President Professor Tan Eng Chye said, “This is an excellent win-win partnership between NUS and NTU which serves as a force multiplier in amplifying our research capabilities and accelerating discoveries with greater scale and impact. Both universities hold complementary strengths in talent, innovation and infrastructure. By combining critical resources and expertise, we will be in a stronger place to accelerate scientific breakthroughs and drive real-world impact locally and internationally. Researchers from both universities are currently collaborating extensively on a wide variety of projects, and we are excited to see these efforts grow even further and champion new solutions to complex challenges.”

NTU President Professor Ho Teck Hua said, “As Singapore’s two largest universities, NTU and NUS compete on the global stage. By partnering with each other through shared research facilities, we are better positioned to enhance Singapore’s standing in global research.”

“Sharing high-end equipment empowers our scientists to increase the impactful research they do, fosters deeper research collaborations, and encourages mutual learning. It also helps us maximise efficiencies in utilising research infrastructure,” added Prof Ho.

At NUS, NTU researchers can access advanced tools such as the Invizo 6000 3D Atom Probe microscope – one of seven in the world and the first in ASEAN – which allows 3D imaging and chemical analysis of materials at the atomic level. The equipment allows atom probe tomography to be carried out, which is especially useful for studying how elements are distributed in semiconductor devices, the structure of advanced alloys, and how atoms move in energy materials used in batteries and fuel cells. Its high precision makes it a key tool for developing next-generation materials and devices.

High-end equipment in NTU available to NUS scientists includes an ultra-powerful microscope, called an aberration-corrected transmission electron microscope with energy dispersive X-ray spectroscopy, electron energy loss spectroscopy, and holography capabilities. It allows researchers to clearly see single columns of atoms in a material at high resolution, identify what elements it is made of, understand the properties of the bonds between the atoms, and visualise the invisible electric and magnetic fields around them. With the microscope, scientists can study materials to make better quantum computers, design more effective nanoparticles for medical diagnoses and drug treatment, as well as develop novel materials for construction and manufacturing.

The sharing arrangement builds on existing research and innovation partnerships between NUS and NTU, including jointly leading research for the Sustainable Tropical Data Centre Testbed, the world’s first testbed in the tropics to advance energy-efficient data centre cooling solutions.

The two universities, together with global investment company Temasek, have also embarked on a joint pilot programme to accelerate the creation of successful deep-tech start-ups from the pipeline of research at NUS and NTU.

MIT documentary “That Creative Spark” wins New England Emmy Award

Enter the basement in one of MIT’s iconic buildings and you’ll find students hammering on anvils and forging red-hot metal into blades. This hands-on lesson in metallurgy is captured in the documentary “That Creative Spark,” which won an Emmy Award for the Education/Schools category at the 48th annual Boston/New England Emmy Awards Ceremony held in Boston in June.

“It’s wonderful to be recognized for the work that we do,” says Clayton Hainsworth, director of MIT Video Productions at MIT Open Learning. “We’re lucky to have incredible people who have decided to bring their outstanding talents here in order to tell MIT’s stories.”

The National Academy of Television Arts and Sciences Boston/New England Chapter recently honored Hainsworth, the documentary’s executive producer; Joe McMaster, director/producer; and Wesley Richardson, cinematographer.

“That Creative Spark” spotlights a series of 2024 Independent Activities Period (IAP) classes about bladesmithing, guest-taught by Bob Kramer, a world-renowned maker of hand-forged knives. In just one week, students learned how to grind, forge, and temper blocks of steel into knives sharp enough to slice through a sheet of paper without resistance.

“It’s an incredibly physical task of making something out of metal,” says McMaster, senior producer for MIT Video Productions. He says this tangible example of hands-on learning “epitomized the MIT motto of ‘mens et manus’ [‘mind and hand’].”

The IAP Bladesmithing with Bob Kramer course allowed students to see concepts and techniques like conductivity and pattern welding in action. Abhi Ratna Sharda, a PhD student in the Department of Materials Science and Engineering (DMSE), still recalls the feeling of metal changing as he worked on it.

“Those are things that you can be informed about through readings and textbooks, but the actual experience of doing them leaves an intuition you’re not quick to forget,” Sharda says.

Filming in the forge — the Merton C. Flemings Materials Processing Laboratory — is not an experience the MIT Video Productions team will be quick to forget, either. Richardson, field production videographer at MIT Video Productions, held the camera just six feet away from red-hot blades being dipped into tubs of oil, creating minor fireballs and plumes of smoke.

“It’s intriguing to see the dexterity that the students have around working with their hands with very dangerous objects in close proximity to each other,” says Richardson. “Students were able to get down to these really precise knives at the end of the class.”

Some people may be surprised to learn that MIT has a working forge, but metalworking is a long tradition at the Institute. In the documentary, Yet-Ming Chiang, Kyocera Professor of Ceramics at DMSE, points out a clue hidden in plain sight: “If you look at the MIT logo, there’s a blacksmith, and ‘mens et manus’ — ‘mind and hand,’” says Chiang, referring to the Institute’s official seal, adopted in 1894. “So the teaching and the practice of working with metals has been an important part of our department for a long time.”

Chiang invited Kramer to be a guest instructor and lecturer for two reasons: Kramer is an industry expert, and he achieved success through hands-on learning — an integral part of an MIT education. After dropping out of college and joining the circus, Kramer later gained practical experience in service-industry kitchens and eventually became one of just 120 Master Bladesmiths in the United States today.

“This nontraditional journey of Bob’s inspires students to think about projects and problems in different ways,” Hainsworth says.

Sharda, for example, is applying the pattern welding process he learned from Kramer in both his PhD program and his recreational jewelry making. The effect creates striking visuals — from starbursts to swirls looking like agate geodes, and more — that extend all the way through the steel, not just the surface of the blade.

“A lot of my research has to do with bonding metals and bonding dissimilar metals, which is the foundation for pattern welding,” Sharda says, adding how this technique has many potential industrial applications. He compares it to the mokume-gane technique used with precious metals, a practice he encountered while researching solid-state welding methods.

“Seeing that executed in a space where it’s very difficult to achieve that level of precision — it inspired me to polish all the tightest nooks and crannies of the pieces I make, and make sure everything is as flawless as possible,” Sharda adds.

In the documentary, Kramer reflects on his month of teaching experience: “When you give someone the opportunity and guide them to actually make something with their hands, there’s very few things that are as satisfying as that.”

In addition to highlighting MIT’s hands-on approach to teaching, “That Creative Spark” showcases the depth of its unique learning experiences.

“There are many sides to MIT in terms of what the students are actually given access to and able to do,” says Richardson. “There is no one face of MIT, because they're highly gifted, highly talented, and often those talents and gifts extend beyond their courses of study.”

That message resonates with Chiang, who says the class underscores the importance of hands-on, experimental research in higher education.

“What I think is a real benefit in experimental research is the physical understanding of how objects and forces relate to each other,” he says. “This kind of class helps students — especially students who’ve never had that experience, never had a job that requires real hands-on work — gain an understanding of those relationships.”

Hainsworth says it’s wonderful to collaborate with his team to tell stories about the spirit and generosity of Institute faculty, guest speakers, and students. The documentary was made possible, in part, thanks to the generous support of A. Neil Pappalardo ’64 and Jane Pappalardo.

“It really is a joy to come in every day and collaborate with people who care deeply about the work they do,” Hainsworth says. “And to be recognized with an Emmy, that is very rewarding.”

Jason Sparapani contributed to this story.

© Photo: Eric Antoniou

Left to right: MIT Open Learning’s Joe McMaster, Wesley Richardson, and Clayton Hainsworth celebrate at the 2025 Boston/New England Emmy Awards ceremony. Their documentary, “That Creative Spark,” won an Emmy award for the Education/Schools category.

A setback to research that offered hope for fibrous dysplasia patients

Promising HSDM research into the rare and debilitating disease was halted due to withdrawal of federal funding. The research had implications for treating a range of skeletal conditions and broader medical applications.
Health

A setback to research that offered hope for fibrous dysplasia patients

Halt to federal funding disrupts study of rare skeletal disease

Heather Denny

HSDM Communications

3 min read

In 2023, the Harvard School of Dental Medicine was awarded a U.S. Department of Defense grant to fund a four-year study of fibrous dysplasia (FD), a severe skeletal disease in which benign tumors cause bone deformities, fractures, and pain. The award aimed to investigate the cellular and molecular underpinnings of the disease, which affects an estimated 1 in 15,000 to 30,000 people and currently has no cure. The research had promise not only for treating FD, but also for finding treatments for conditions affecting military personnel, including blast-induced heterotopic ossification and chronic bone pain.

At the time, the funding was applauded by patients and patient advocacy groups such as the FD/MAS Alliance, a nonprofit dedicated to finding evidence-based treatments for fibrous dysplasia and McCune-Albright syndrome.

“This funding was more than just a financial award—it was a crucial investment in understanding and eventually treating a devastating disease.”

Adrienne McBride

“This funding was more than just a financial award—it was a crucial investment in understanding and eventually treating a devastating disease,” said Adrienne McBride, executive director of the Alliance. “Advancing research in FD/MAS benefits those living with this rare disease and holds great potential for broader medical applications.”

The mechanisms investigated in FD research have the potential to yield insights relevant for many other diseases causing bone fragility, pain, and fractures. With federal research funding to Harvard now frozen, these insights may never be realized.

“FD patients and their families had been closely following research advances, hoping for novel, effective interventions. The termination of leading-edge projects like this erodes this hope and sends a discouraging signal to those living with an already-overlooked disease,” said Yingzi Yang, professor of Developmental Biology at HSDM, and principal investigator on the grant.

Yingzi Yang.

Photo by Steve Gilbert

Yang and her partners at Massachusetts General Hospital (MGH) had been making progress in the few years since the funding was awarded. While some work continues at MGH, the research based in the Yang Lab at HSDM, which was critical to providing a greater understanding of the disease mutation, has stopped.

“We had made substantial progress in terms of identifying potential treatment targets of this devastating disease based on getting a better understanding of the molecular mechanisms,” said Yang. “Cutting off our study disrupts the holistic understanding of the FD disease and reduces the research rigor and impacts.”

“Cutting off our study disrupts the holistic understanding of the FD disease and reduces the research rigor and impacts.”

Yingzi Yang

“The cancellation of this grant is a significant setback for FD/MAS research and for patients, including military personnel, who rely on scientific progress for hope and support,” said McBride.

FD/MAS can affect every bone in the body, but the largest subpopulation of those with the disease is affected by FD lesions in their craniofacial bones, leading to severe facial deformities.

HSDM alumnus Christopher H. Fox, DMD87, DMSc91, who leads the American Association for Dental, Oral, and Craniofacial Research (AADOCR), also expressed deep concerns over the implications.

“This funding cut of such promising research is a tragedy for the FD/MAS community and indeed for our country.  Through our advocacy efforts, AADOCR is doing everything we can to reverse these ill-advised decisions,” said Fox.

3 Questions: Measuring the financial impact of design in the built environment

Design — its creation, function, and aesthetics — contributes value across many different disciplines. That contribution is taken for granted in architecture, but it has not traditionally been acknowledged in real estate, despite the close association between the two fields. Instead, real estate valuation has been determined by a few standard factors: income generated, recent comparable sales, and replacement costs.

Now, a new book by researchers at MIT explores how design can be quantified in real estate valuation. “Value of Design: Creating Agency Through Data-Driven Insights” (Applied Research and Design Publishing) uses data-driven research to reveal how design leaves measurable traces in the built environment that correlate with real economic, social, and environmental outcomes.

The late MIT Research Scientist Andrea Chegut, along with Visiting Instructor Minkoo Kang SMRED ’18, Helena Rong SMArchS ’19, and Juncheng “Tony” Yang SMArchS ’19, present years of interdisciplinary social science research that weaves together historical context, real-world case studies, and critical reflections, engaging a broader dialogue on design, value, and the built environment.

Kang, Rong, and Yang met as students at the MIT Real Estate Innovation Lab, which was co-founded and directed by Chegut, who passed away in December 2022. Under Chegut’s direction, interdisciplinary research at the lab helped establish the analytical tools and methodologies that underpin the book’s core arguments. The lab formally closed after Chegut’s passing.

Q: How might the tools used in this research impact how an investor or real estate developer makes decisions on a property?

Kang: This book doesn’t offer a formula for replicable outcomes, nor should it. Real estate is deeply contextual, and every project carries its own constraints and potential. What our research provides is evidence: looking back at 20 years of patterns in New York City data, we see that design components — physical features such as podiums, unique non-orthogonal geometries, and high-rise setbacks; environmental qualities like daylight access, greenery, and open views; and a building’s contextual fit within its neighborhood — have a more substantial and consistent influence on value than the industry tends to credit.

Rong: One reason design has been left out of valuation practice is the siloing of architectural information: drawings stay inside individual firms, and there are no standards for identifying or quantifying the components that make up a design. We have countless databases, but never a true “design database.” This book starts to fill that gap by inventorying architectural features and showing how to measure them with both insights from architectural theory and exploration of computational methods and tools. Using today’s reality-capture technologies and the large-scale transaction data we obtained, we uncovered long-term patterns: Buildings that invested in thoughtful design often performed better, not only in financial terms, but also in how they contributed to neighborhood identity and sustained demand. The takeaway isn’t prescriptive, but directional. Design should not be treated as an aesthetic afterthought, or an intangible variable. Its impact is durable, measurable, and, importantly, undervalued, which is why it is something developers and investors should not only pay attention to, but actively prioritize.
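As a rough illustration of the kind of analysis the book describes (not the authors’ actual models or data), one could regress log sale prices on an inventory of design features alongside standard controls. The column names and the simple ordinary-least-squares setup below are hypothetical, a minimal sketch of a hedonic-style approach:

    # Illustrative hedonic-style regression sketch: relate log sale price to
    # design features plus standard controls. Column names and data are
    # hypothetical; this is not the Real Estate Innovation Lab's code.
    import numpy as np
    import pandas as pd

    def design_premiums(df: pd.DataFrame) -> pd.Series:
        """Estimate price premiums associated with design features via OLS."""
        features = ["has_podium", "non_orthogonal_geometry", "high_rise_setback",
                    "daylight_access", "greenery_score", "open_view_score"]
        controls = ["log_floor_area", "building_age", "neighborhood_median_income"]
        X = df[features + controls].to_numpy(dtype=float)
        X = np.column_stack([np.ones(len(df)), X])          # add an intercept
        y = np.log(df["sale_price"].to_numpy(dtype=float))  # log of transaction price
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)        # ordinary least squares
        return pd.Series(beta[1:1 + len(features)], index=features)

In this toy setup, each returned coefficient is read as the approximate change in log price associated with one unit of that feature, holding the listed controls fixed.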

Q: Can you share an example of how design influences urban change?

Kang: As a designer and real estate developer, my work sits at the intersection of architecture, finance, and neighborhood communities. I often collaborate with resident stakeholders to reimagine overlooked or underutilized properties as meaningful, long-term assets — using design both as a tool to shape development strategy and as a medium for community engagement and consensus building.

One recent example involved supporting a longtime property owner in transforming their single-family home into a 40-unit, mixed-income apartment building. Rather than maximizing density at all costs, the project prioritized livability, sustainability, and contextual fit — compact units with generous access to light and air, shared amenities like co-working space and a community room, and passive house-level energy performance.

Through design, we were able to unlock a new housing typology — one that balances financial feasibility with community ownership and long-term affordability. It’s a reminder that design’s influence on urban change extends beyond aesthetics or form. It helps determine who development serves, how neighborhoods evolve, and what kinds of futures are made possible.

Q: How can this research be of use to policymakers?

Yang: Policymakers usually consider broader and longer-term urban outcomes: livability, resilience, equity, and community cohesion. This research provides the empirical foundation to connect those outcomes to concrete design choices.

By quantifying how design influences not just real estate performance, but neighborhood identity, access, and sustainability, the book offers policymakers a new evidence base to inform zoning, public incentives, and regulatory frameworks. But more than that, we think this kind of data-driven insight can help align interests across the ecosystem: urban planners, private developers, community organizations, and residents, by demonstrating that high-quality design delivers shared, long-term value.

In a time when urban space is increasingly contested, being able to point to measurable impacts of design helps shift debates from ideology to informed decision-making. It gives public agencies a firmer ground to demand more, and to build coalitions around the kinds of neighborhoods we want to sustain.

Basically, this research helps create agency by making design intelligible in urban spaces where key decisions are made. The kind of agency we’re interested in is not about control, but about influence and authorship. Design shapes how cities function and feel, who they serve, and how they change. Yet too often, those decisions are made without recognizing design’s role. By surfacing how design leaves durable, measurable traces in the built environment, this work gives designers and allied actors a stronger voice in shaping development and public discourse.

It also invites broader participation: community groups, resident advocates, and others can use this evidence to articulate why building attributes and environmental quality matter. In this sense, the agency is distributed. It’s not just about empowering designers, but about equipping all stakeholders to see design as a shared, strategic tool for shaping more equitable, resilient, and humane urban futures.

© Image courtesy of the School of Architecture and Planning.

Authors (clockwise from upper left): Juncheng “Tony” Yang, Helena Rong, Andrea Chegut, and Minkoo Kang.

Cambridge researchers play key role in evidence leading to approval of new treatment for hereditary blindness

Man undergoing an eye examination

Leber hereditary optic neuropathy (LHON) affects around 2,500 people in the United Kingdom. It causes rapidly progressive loss of vision in both eyes. Within weeks of onset, an affected individual reaches the legal threshold to be considered severely sight impaired (blind).

The condition tends to affect young men, with a peak age of onset between 15 and 35 years, but women can also be affected and the loss of vision can occur at any age. The prognosis is poor, with only around one in 10 affected individuals experiencing some spontaneous visual improvement, which is invariably partial.

LHON is caused by the loss of retinal ganglion cells, specialised nerve cells in the innermost layer of the retina. The projections, or ‘axons’, from these cells converge to form the optic nerve, the cable that transmits visual information from the eye to the brain. Once these retinal ganglion cells are lost, the damage becomes irreversible. LHON is primarily caused by genetic defects within the mitochondrial genome, which is transmitted down the maternal line. 

In 2011, the journal Brain published the results of a landmark randomised placebo-controlled trial of the drug idebenone to treat LHON. The RHODOS trial was led by Patrick Chinnery, at the time a researcher at Newcastle University and now Professor of Neurology at the University of Cambridge. It found some potential benefit in a subgroup of patients. However, treatment with idebenone was only given for six months, and it was not clear whether there was any benefit in treating individuals who had been affected for more than one year.

“At the time, we had only anecdotal evidence that idebenone would work for patients with LHON,” said Professor Chinnery. “Our clinical trial was the first strong evidence that it could help stabilise vision in some patients. It was an important step towards providing a new treatment.”

One of Professor Chinnery’s collaborators on the RHODOS trial was Patrick Yu-Wai-Man, Professor of Ophthalmology at the University of Cambridge, who led the follow-up LEROS trial. This assessed the efficacy and safety of idebenone treatment in patients with LHON up to five years after symptom onset and over a treatment period of 24 months. This second trial found that the drug can help stabilise vision in some patients and, in certain cases, may even lead to improvement when treatment is provided within five years of vision being affected.

These studies provided crucial evidence to support the use of idebenone to treat LHON patients. The drug was licensed for limited use by patients in Scotland, Wales and Northern Ireland and it has now been approved by NICE for use in patients aged 12 years and over in England.

Professor Yu-Wai-Man said: “LHON causes devastating visual loss and it is a life-changing diagnosis for the affected individual and their family. England is now in line with the rest of the United Kingdom with idebenone now available through the NHS. This will come as a great relief to the LHON community in this country bringing hope to those who have experienced significant visual loss from this mitochondrial genetic disorder.”

The development has been welcomed by charities that have been arguing for idebenone to be made available across the UK. A LHON Society spokesperson said: “This is a critical step towards full access to idebenone for patients, that may alleviate some of the impacts of LHON.”

Katie Waller, Head of Patient Programmes at The Lily Foundation, a charity that supports patients affected by mitochondrial diseases, said: “This is a huge win for the mito community and we’re proud to have been a key stakeholder throughout the process. While it isn’t a cure, this treatment offers real potential for patients to preserve or improve vision, giving the chance to regain independence, confidence and a better quality of life.”

Idebenone will not work for everyone, and responses vary from person to person. LHON patients are encouraged to speak with the healthcare professional responsible for their care to understand whether idebenone is the right treatment for them.

The National Institute for Health and Care Excellence (NICE) has today announced the approval of a new treatment for a form of hereditary blindness for use on the NHS in England. Cambridge researchers played a pivotal role in providing the evidence that led to this important development.

This will bring hope to those who have experienced significant visual loss from this mitochondrial genetic disorder
Patrick Yu-Wai-Man
Man undergoing an eye examination

Creative Commons License.
The text in this work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. Images, including our videos, are Copyright ©University of Cambridge and licensors/contributors as identified. All rights reserved. We make our image and video content available in a number of ways – on our main website under its Terms and conditions, and on a range of channels including social media that permit your use and sharing of our content under their respective Terms.


Astronomers find new evidence for planet around our closest solar twin

Artist's impression of a gas giant orbiting Alpha Centauri A.

Visible only from the southern hemisphere, the system is made up of the binary Alpha Centauri A and Alpha Centauri B, both Sun-like stars, and the faint red dwarf star Proxima Centauri. Alpha Centauri A is the third brightest star in the night sky.

While there are three confirmed planets orbiting Proxima Centauri, the presence of other worlds surrounding Alpha Centauri A and Alpha Centauri B has proved difficult to confirm, because the stars are so bright and close, and move across the sky so quickly.

Now, observations from Webb’s Mid-Infrared Instrument (MIRI) are providing the strongest evidence to date of a gas giant orbiting Alpha Centauri A. The results, from an international team including researchers from the University of Cambridge, have been accepted for publication in two papers in The Astrophysical Journal Letters.

If confirmed, the planet would be the closest to Earth that orbits in the habitable zone of a Sun-like star. However, because the planet candidate is a gas giant, scientists say it would not support life as we know it.

Several rounds of observations by Webb, analysis by the research team, and computer modelling helped determine that the source seen in Webb’s image is likely to be a planet, and not a background object (like a galaxy), a foreground object (a passing asteroid), or another image artefact.

“Webb was designed and optimised to find the most distant galaxies in the universe. The team had to come up with a custom observing sequence just for this target, and their extra effort paid off spectacularly,” said Charles Beichman, NASA’s Jet Propulsion Laboratory and the NASA Exoplanet Science Institute at Caltech, co-first author on the new papers.

The first observations of the system took place in August 2024. While extra brightness from the nearby companion star Alpha Centauri B complicated the analysis, the team was able to subtract out the light from both stars to reveal an object over 10,000 times fainter than Alpha Centauri A, separated from the star by about two times the distance between the Sun and Earth.

While the initial detection was exciting, the research team needed more data to come to a firm conclusion. However, additional observations of the system in February 2025 and April 2025 did not reveal any objects like the one identified in August 2024.

“We were faced with the case of a disappearing planet! To investigate this mystery, we used computer models to simulate millions of potential orbits, incorporating the knowledge gained when we saw the planet, as well as when we did not,” said co-first author Aniket Sanghi of the California Institute of Technology.

In these simulations, the team took into account the 2019 sighting of a potential exoplanet candidate by the European Southern Observatory’s Very Large Telescope as well as the new data from Webb, and considered only orbits that would be gravitationally stable in the presence of Alpha Centauri B, meaning the planet wouldn’t get flung out of the system.

The researchers say a non-detection in the second and third round of observations with Webb wasn’t surprising.

“We found that in half of the possible orbits simulated, the planet moved too close to the star and wouldn’t have been visible to Webb in both February and April 2025,” said Sanghi.
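A toy Monte Carlo conveys the flavor of that reasoning, although it is not the team’s actual pipeline: sample plausible orbits, keep those consistent with the roughly 2 au separation seen in August 2024, and count how often the planet would sit too close to the star at the two follow-up epochs. The stellar mass, orbit ranges, face-on geometry, and detectability threshold below are all assumptions made for illustration.

    # Toy orbit Monte Carlo (illustrative only, not the published analysis).
    import numpy as np

    rng = np.random.default_rng(0)
    N = 100_000
    MIN_SEP_AU = 1.0                      # assumed smallest separation Webb recovers [au]

    a = rng.uniform(1.3, 1.7, N)          # semi-major axis [au]
    e = rng.uniform(0.1, 0.4, N)          # eccentricity
    period = np.sqrt(a**3 / 1.1)          # period [yr], Kepler's third law, ~1.1 solar masses
    phase0 = rng.uniform(0.0, 1.0, N)     # orbital phase at the August 2024 epoch

    def star_planet_distance(dt_years):
        """Distance from the star dt_years after the first detection [au]."""
        mean_anom = 2 * np.pi * ((phase0 + dt_years / period) % 1.0)
        ecc_anom = mean_anom.copy()
        for _ in range(30):                          # fixed-point solve of Kepler's equation
            ecc_anom = mean_anom + e * np.sin(ecc_anom)
        return a * (1 - e * np.cos(ecc_anom))

    # Keep orbits consistent with the ~2 au separation seen in August 2024,
    # then ask how often the planet hides at both the Feb and Apr 2025 epochs.
    seen = star_planet_distance(0.0) > 1.8
    hidden = ((star_planet_distance(0.5) < MIN_SEP_AU) &
              (star_planet_distance(0.67) < MIN_SEP_AU))
    print(f"hidden at both follow-ups: {hidden[seen].mean():.0%} of plausible orbits")

The real analysis additionally models the full three-dimensional orbital geometry, the 2019 VLT candidate, dynamical stability against Alpha Centauri B, and the instrument’s actual sensitivity.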

In addition to these simulations, the Cambridge members of the research team analysed the Webb data to search for any signs of a type of cosmic dust, known as exozodiacal dust, around Alpha Centauri A. This cloud of dust, produced by objects such as comets and asteroids breaking apart, forms a faint, glowing disc around a star.

“Exozodiacal dust helps us learn about the architecture and evolution of planetary systems,” said co-author Professor Mark Wyatt from Cambridge’s Institute of Astronomy. “But it’s also important when searching for rocky planets, since dust in the habitable zone of a star can obscure or mimic planetary signals.”

No dust was detected in these observations; however, the team showed that the measurements were sensitive to dust levels an order of magnitude lower than any previous measurement, which could be valuable for future planet searches around this star.

“This observation shows how deeply Webb can probe the dust environment of the nearest Sun-like stars,” said co-author Dr Max Sommer, also from Cambridge’s Institute of Astronomy. “We can now explore exozodiacal dust at levels not much higher than those in our own Solar System, tapping into a whole new way of looking at other star systems.”

Based on the brightness of the planet in the mid-infrared observations and the orbit simulations, the researchers say it could be a gas giant approximately the mass of Saturn orbiting Alpha Centauri A in an elliptical path that varies between one and two times the distance between the Sun and Earth.

If confirmed, the potential planet seen in the Webb image of Alpha Centauri A would mark a new milestone for exoplanet imaging efforts. Of all the directly imaged exoplanets, this would be the closest to its star seen so far. It’s also the most similar in temperature and age to the giant planets in our solar system, and the nearest to Earth.

“Its very existence in a system of two closely separated stars would challenge our understanding of how planets form, survive, and evolve in chaotic environments,” said Sanghi.

The James Webb Space Telescope is an international programme led by NASA with its partners, ESA (European Space Agency) and CSA (Canadian Space Agency).

Reference:
Charles Beichman, Aniket Sanghi et al. ‘Worlds Next Door: A Candidate Giant Planet Imaged in the Habitable Zone of α Cen A. I. Observations, Orbital and Physical Properties, and Exozodi Upper Limits’. The Astrophysical Journal Letters (in press). arXiv:2508.03812v1

Aniket Sanghi, Charles Beichman et al. ‘Worlds Next Door: A Candidate Giant Planet Imaged in the Habitable Zone of α Cen A. II. Binary Star Modeling, Planet and Exozodi Search, and Sensitivity Analysis’. The Astrophysical Journal Letters (in press). arXiv:2508.03812

Adapted from a NASA press release.

Astronomers using the NASA/ESA/CSA James Webb Space Telescope have found strong evidence of a giant planet orbiting a star in the stellar system closest to our own Sun. At just four light-years away from Earth, the Alpha Centauri triple star system has long been a target in the search for worlds beyond our solar system.

Artist's impression of a gas giant orbiting Alpha Centauri A.

Creative Commons License.
The text in this work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. Images, including our videos, are Copyright ©University of Cambridge and licensors/contributors as identified. All rights reserved. We make our image and video content available in a number of ways – on our main website under its Terms and conditions, and on a range of channels including social media that permit your use and sharing of our content under their respective Terms.


Eco-driving measures could significantly reduce vehicle emissions

Any motorist who has ever waited through multiple cycles for a traffic light to turn green knows how annoying signalized intersections can be. But sitting at intersections isn’t just a drag on drivers’ patience — unproductive vehicle idling could contribute as much as 15 percent of the carbon dioxide emissions from U.S. land transportation.

A large-scale modeling study led by MIT researchers reveals that eco-driving measures, which can involve dynamically adjusting vehicle speeds to reduce stopping and excessive acceleration, could significantly reduce those CO2 emissions.

Using a powerful artificial intelligence method called deep reinforcement learning, the researchers conducted an in-depth impact assessment of the factors affecting vehicle emissions in three major U.S. cities.

Their analysis indicates that fully adopting eco-driving measures could cut annual city-wide intersection carbon emissions by 11 to 22 percent, without slowing traffic throughput or affecting vehicle and traffic safety.

Even if only 10 percent of vehicles on the road employed eco-driving, cities would still see 25 to 50 percent of the total possible reduction in CO2 emissions, the researchers found.

In addition, dynamically optimizing speed limits at about 20 percent of intersections provides 70 percent of the total emission benefits. This indicates that eco-driving measures could be implemented gradually while still having measurable, positive impacts on mitigating climate change and improving public health.

Two simulated intersections with heavy traffic; the one with 100 percent eco-driving adoption shows less congestion.

“Vehicle-based control strategies like eco-driving can move the needle on climate change reduction. We’ve shown here that modern machine-learning tools, like deep reinforcement learning, can accelerate the kinds of analysis that support sociotechnical decision making. This is just the tip of the iceberg,” says senior author Cathy Wu, the Class of 1954 Career Development Associate Professor in Civil and Environmental Engineering (CEE) and the Institute for Data, Systems, and Society (IDSS) at MIT, and a member of the Laboratory for Information and Decision Systems (LIDS).

She is joined on the paper by lead author Vindula Jayawardana, an MIT graduate student; as well as MIT graduate students Ao Qu, Cameron Hickert, and Edgar Sanchez; MIT undergraduate Catherine Tang; Baptiste Freydt, a graduate student at ETH Zurich; and Mark Taylor and Blaine Leonard of the Utah Department of Transportation. The research appears in Transportation Research Part C: Emerging Technologies.

A multi-part modeling study

Traffic control measures typically call to mind fixed infrastructure, like stop signs and traffic signals. But as vehicles become more technologically advanced, new opportunities open up for eco-driving, a catch-all term for vehicle-based traffic control measures such as dynamically adjusting speeds to reduce energy consumption.

In the near term, eco-driving could involve speed guidance in the form of vehicle dashboards or smartphone apps. In the longer term, eco-driving could involve intelligent speed commands that directly control the acceleration of semi-autonomous and fully autonomous vehicles through vehicle-to-infrastructure communication systems.

“Most prior work has focused on how to implement eco-driving. We shifted the frame to consider the question of should we implement eco-driving. If we were to deploy this technology at scale, would it make a difference?” Wu says.

To answer that question, the researchers embarked on a multifaceted modeling study that would take the better part of four years to complete.

They began by identifying 33 factors that influence vehicle emissions, including temperature, road grade, intersection topology, vehicle age, traffic demand, vehicle types, driver behavior, traffic signal timing, and road geometry.

“One of the biggest challenges was making sure we were diligent and didn’t leave out any major factors,” Wu says.

Then they used data from OpenStreetMap, U.S. geological surveys, and other sources to create digital replicas of more than 6,000 signalized intersections in three cities — Atlanta, San Francisco, and Los Angeles — and simulated more than a million traffic scenarios.

The researchers used deep reinforcement learning to optimize each scenario for eco-driving to achieve the maximum emissions benefits.

Reinforcement learning optimizes the vehicles’ driving behavior through trial-and-error interactions with a high-fidelity traffic simulator, rewarding vehicle behaviors that are more energy-efficient while penalizing those that are not.

The researchers cast the problem as a decentralized cooperative multi-agent control problem: the vehicles cooperate to improve overall energy efficiency, including that of non-participating vehicles, while each acts independently on its own local information, avoiding the need for costly communication between vehicles.
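A minimal sketch of that setup might look like the following, where every participating vehicle applies the same learned policy to its own local observation and is rewarded for smooth, low-emission progress. The observation fields, reward weights, and fuel proxy are assumptions for illustration, not the study’s implementation.

    # Illustrative decentralized eco-driving control loop (not the study's code).
    from dataclasses import dataclass

    @dataclass
    class LocalObservation:
        speed: float            # m/s
        accel: float            # m/s^2
        dist_to_signal: float   # meters to the next traffic signal
        signal_is_red: bool

    def reward(obs: LocalObservation, w_fuel=1.0, w_idle=0.5, w_progress=0.1) -> float:
        """Reward fuel-efficient progress; penalize idling and hard acceleration."""
        fuel_proxy = 0.1 + 0.05 * obs.speed + 0.5 * max(obs.accel, 0.0) ** 2
        idling = 1.0 if obs.speed < 0.1 else 0.0
        return w_progress * obs.speed - w_fuel * fuel_proxy - w_idle * idling

    def control_step(observations, policy):
        """Each vehicle acts on its own observation; no vehicle-to-vehicle messages."""
        return [policy(obs) for obs in observations]  # accelerations chosen independently

During training, the policy would be updated, for example with a standard deep reinforcement learning algorithm, to maximize the cumulative reward over simulated approaches to an intersection.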

However, training vehicle behaviors that generalize across diverse intersection traffic scenarios was a major challenge. The researchers observed that some scenarios are more similar to one another than others, such as scenarios with the same number of lanes or the same number of traffic signal phases.

As such, the researchers trained separate reinforcement learning models for different clusters of traffic scenarios, yielding better emission benefits overall.
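One simple way to picture that clustering step, with grouping keys and a training call that are placeholders rather than the study’s implementation, is to bucket scenarios by coarse structural features and fit one policy per bucket:

    # Sketch of per-cluster training (illustrative only).
    from collections import defaultdict

    def cluster_key(scenario: dict):
        """Group scenarios that share coarse structure, e.g. lane and phase counts."""
        return (scenario["num_lanes"], scenario["num_signal_phases"])

    def train_per_cluster(scenarios, train):
        clusters = defaultdict(list)
        for s in scenarios:
            clusters[cluster_key(s)].append(s)
        # One reinforcement learning policy per cluster of similar scenarios.
        return {key: train(group) for key, group in clusters.items()}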

But even with the help of AI, analyzing citywide traffic at the network level would be so computationally intensive it could take another decade to unravel, Wu says.

Instead, they broke the problem down and solved each eco-driving scenario at the individual intersection level.

“We carefully constrained the impact of eco-driving control at each intersection on neighboring intersections. In this way, we dramatically simplified the problem, which enabled us to perform this analysis at scale, without introducing unknown network effects,” she says.

Significant emissions benefits

When they analyzed the results, the researchers found that full adoption of eco-driving could result in intersection emissions reductions of between 11 and 22 percent.

These benefits differ depending on the layout of a city’s streets. A denser city like San Francisco has less room to implement eco-driving between intersections, offering a possible explanation for reduced emission savings, while Atlanta could see greater benefits given its higher speed limits.

Even if only 10 percent of vehicles employ eco-driving, a city could still realize 25 to 50 percent of the total emissions benefit because of car-following dynamics: Non-eco-driving vehicles would follow controlled eco-driving vehicles as they optimize speed to pass smoothly through intersections, reducing their carbon emissions as well.

In some cases, eco-driving could also increase vehicle throughput while minimizing emissions. However, Wu cautions that increasing throughput could result in more drivers taking to the roads, reducing emissions benefits.

And while their analysis of widely used safety metrics known as surrogate safety measures, such as time to collision, suggests that eco-driving is as safe as human driving, it could cause unexpected behavior in human drivers. More research is needed to fully understand potential safety impacts, Wu says.

Their results also show that eco-driving could provide even greater benefits when combined with alternative transportation decarbonization solutions. For instance, 20 percent eco-driving adoption in San Francisco would cut emission levels by 7 percent, but when combined with the projected adoption of hybrid and electric vehicles, it would cut emissions by 17 percent.

“This is a first attempt to systematically quantify network-wide environmental benefits of eco-driving. This is a great research effort that will serve as a key reference for others to build on in the assessment of eco-driving systems,” says Hesham Rakha, the Samuel L. Pritchard Professor of Engineering at Virginia Tech, who was not involved with this research.

And while the researchers focus on carbon emissions, the benefits are highly correlated with improvements in fuel consumption, energy use, and air quality.

“This is almost a free intervention. We already have smartphones in our cars, and we are rapidly adopting cars with more advanced automation features. For something to scale quickly in practice, it must be relatively simple to implement and shovel-ready. Eco-driving fits that bill,” Wu says.

This work is funded, in part, by Amazon and the Utah Department of Transportation.

© Image: iStock; MIT News

Implementing eco-driving techniques can significantly reduce intersection carbon dioxide emissions without impacting traffic throughput or safety, according to new MIT research.

Could lithium explain — and treat — Alzheimer’s?

Health

Could lithium explain — and treat — Alzheimer’s?

One pair of boxes shows fewer green amyloid clusters on the left and more on the right. Another pair of boxes shows a dim arc of purple and red tau on the left and a brighter arc on the right.

In a mouse model of Alzheimer’s disease, lithium deficiency (right) dramatically increased amyloid beta deposits in the brain compared with mice that had normal physiological levels of lithium (left). Bottom row: The same was true for the Alzheimer’s neurofibrillary tangle protein tau.

Yankner Lab

Stephanie Dutchen

HMS Communications

9 min read

Study offers new theory of disease and strategy for fighting it

What is the earliest spark that ignites the memory-robbing march of Alzheimer’s disease? Why do some people with Alzheimer’s-like changes in the brain never go on to develop dementia? These questions have bedeviled neuroscientists for decades.

Now, a team of researchers at Harvard Medical School may have found an answer: lithium deficiency in the brain.

The work, published Wednesday in Nature, shows for the first time that lithium occurs naturally in the brain, shields it from neurodegeneration, and maintains the normal function of all major brain cell types. The findings — 10 years in the making — are based on a series of experiments in mice and on analyses of human brain tissue and blood samples from individuals in various stages of cognitive health.

The scientists found that lithium loss in the human brain is one of the earliest changes leading to Alzheimer’s, while in mice, similar lithium depletion accelerated brain pathology and memory decline. The team further found that reduced lithium levels stemmed from binding to amyloid plaques and impaired uptake in the brain. In a final set of experiments, the team found that a novel lithium compound that avoids capture by amyloid plaques restored memory in mice.

The results unify decades-long observations in patients, providing a new theory of the disease and a new strategy for early diagnosis, prevention, and treatment.

Lithium screening through routine blood tests may one day offer a way to identify at-risk individuals who would benefit from treatment to prevent or delay Alzheimer’s onset.

Affecting an estimated 400 million people worldwide, Alzheimer’s disease involves an array of brain abnormalities — such as clumps of the protein amyloid-beta, neurofibrillary tangles of the protein tau, and loss of a protective protein called REST — but these never explained the full story of the disease. For instance, some people with such abnormalities show no signs of cognitive decline. And recently developed treatments that target amyloid-beta typically don’t reverse memory loss and only modestly reduce the rate of decline.

It’s also clear that genetic and environmental factors affect risk of Alzheimer’s, but scientists haven’t figured out why some people with the same risk factors develop the disease while others don’t.

Lithium, the study authors said, may be a critical missing link.

“The idea that lithium deficiency could be a cause of Alzheimer’s disease is new and suggests a different therapeutic approach,” said senior author Bruce Yankner, professor of genetics and neurology in the Blavatnik Institute at HMS, who in the 1990s was the first to demonstrate that amyloid-beta is toxic.

The study raises hopes that researchers could one day use lithium to treat the disease in its entirety rather than focusing on a single facet such as amyloid-beta or tau, he said.

One of the main discoveries in the study is that as amyloid-beta begins to form deposits in the early stages of dementia in both humans and mouse models, it binds to lithium, reducing lithium’s function in the brain. The lower lithium levels affect all major brain-cell types and, in mice, give rise to changes recapitulating Alzheimer’s disease, including memory loss.

The authors identified a class of lithium compounds that can evade capture by amyloid-beta. Treating mice with the most potent amyloid-evading compound, called lithium orotate, reversed Alzheimer’s disease pathology, prevented brain-cell damage, and restored memory.

Stacked boxes on the left show significantly fewer green amyloid-beta clumps for mice treated with lithium orotate. Stacked boxes on the right show a similar drop in red tau tangles.

Treating mice with the amyloid-evading lithium orotate (top) reduced amyloid beta (left) and tau (right) much more effectively than lithium carbonate (bottom).

Yankner Lab

Although the findings need to be confirmed in humans through clinical trials, they suggest that measuring lithium levels could help screen for early Alzheimer’s. Moreover, the findings point to the importance of testing amyloid-evading lithium compounds for treatment or prevention.

Other lithium compounds are already used to treat bipolar disorder and major depressive disorder, but they are given at much higher concentrations that can be toxic, especially to older people. Yankner’s team found that lithium orotate is effective at one-thousandth that dose — enough to mimic the natural level of lithium in the brain. Mice treated for nearly their entire adult lives showed no evidence of toxicity.

“You have to be careful about extrapolating from mouse models, and you never know until you try it in a controlled human clinical trial,” Yankner said. “But so far the results are very encouraging.”

Lithium depletion is an early sign of Alzheimer’s

Yankner became interested in lithium while using it to study the neuroprotective protein REST. Discovering whether lithium is found in the human brain and whether its levels change as neurodegeneration develops and progresses, however, required access to brain tissue, which generally can’t be accessed in living people.

So the lab partnered with the Rush Memory and Aging Project in Chicago, which has a bank of postmortem brain tissue donated by thousands of study participants across the full spectrum of cognitive health and disease.

Having that range was critical because trying to study the brain in the late stages of Alzheimer’s is like looking at a battlefield after a war, said Yankner; there’s a lot of damage and it’s hard to tell how it all started. But in the early stages, “before the brain is badly damaged, you can get important clues,” he said.

Led by first author Liviu Aron, senior research associate in the Yankner Lab, the team used an advanced type of mass spectrometry to measure trace levels of about 30 different metals in the brains and blood of cognitively healthy people, those in an early stage of dementia called mild cognitive impairment, and those with advanced Alzheimer’s.

Lithium was the only metal that had markedly different levels across groups and changed at the earliest stages of memory loss. Its levels were high in the cognitively healthy donors but greatly diminished in those with mild impairment or full-blown Alzheimer’s.

A scatter plot of different metals shows one main cluster and then an outlier, labeled “lithium.”
Lithium (upper left) was the only metal that differed significantly between people with and without mild cognitive impairment, often a precursor to Alzheimer’s.
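As a rough illustration of that kind of metal-by-metal screen (not the lab’s actual analysis, and with hypothetical column names), one could compare each metal’s measured level between cognitively healthy donors and donors with mild cognitive impairment:

    # Illustrative per-metal group comparison (not the study's code).
    import pandas as pd
    from scipy import stats

    def metals_that_differ(df: pd.DataFrame, metals, alpha=0.05):
        """Flag metals whose measured levels differ between healthy and MCI donors."""
        healthy = df[df["group"] == "healthy"]
        mci = df[df["group"] == "mci"]
        pvalues = {}
        for metal in metals:
            _, p = stats.ttest_ind(healthy[metal], mci[metal], equal_var=False)
            pvalues[metal] = p
        return {metal: p for metal, p in pvalues.items() if p < alpha}

In the study’s data, lithium was the metal that stood out in comparisons of this kind.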

The team replicated the findings in samples obtained from multiple brain banks nationwide.

The observation aligned with previous population studies showing that higher lithium levels in the environment, including in drinking water, tracked with lower rates of dementia.

But the new study went further by directly observing lithium in the brains of people who hadn’t received lithium as a treatment, establishing a range that constitutes normal levels, and demonstrating that lithium plays an essential role in brain physiology.

“Lithium turns out to be like other nutrients we get from the environment, such as iron and vitamin C,” Yankner said. “It’s the first time anyone’s shown that lithium exists at a natural level that’s biologically meaningful without giving it as a drug.”

Then Yankner and colleagues took things a step further. They demonstrated in mice that lithium depletion isn’t merely linked to Alzheimer’s disease — it helps drive it.

Loss of lithium causes the range of Alzheimer’s-related changes

The researchers found that feeding healthy mice a lithium-restricted diet brought their brain lithium down to levels similar to those in patients with Alzheimer’s disease. This appeared to accelerate the aging process, giving rise to brain inflammation, loss of synaptic connections between neurons, and cognitive decline.

In Alzheimer’s mouse models, depleted lithium dramatically accelerated the formation of amyloid-beta plaques and structures that resemble neurofibrillary tangles. Lithium depletion also activated inflammatory cells in the brain called microglia, impairing their ability to degrade amyloid; caused the loss of synapses, axons, and neuron-protecting myelin; and accelerated cognitive decline and memory loss — all hallmarks of Alzheimer’s disease.

The mouse experiments further revealed that lithium altered the activity of genes known to raise or lower the risk of Alzheimer’s, including the best-known, APOE.

Side-by-side grayscale electron microscopy images show thicker cell borders on the left and thinner borders on the right.

Lithium deficiency thinned the myelin that coats neurons (right) compared to normal mice (left).

Yankner Lab

Replenishing lithium by giving the mice lithium orotate in their water reversed the disease-related damage and restored memory function, even in older mice with advanced disease. Notably, maintaining stable lithium levels in early life prevented Alzheimer’s onset — further evidence that lithium depletion helps drive the disease process.

“What impresses me the most about lithium is the widespread effect it has on the various manifestations of Alzheimer’s. I really have not seen anything quite like it all my years of working on this disease,” said Yankner.

A promising avenue for Alzheimer’s treatment

A few limited clinical trials of lithium for Alzheimer’s disease have shown some efficacy, but the lithium compounds they used — such as the clinical standard, lithium carbonate — can be toxic to aging people at the high doses normally used in the clinic.

The new research explains why: Amyloid-beta was sequestering these other lithium compounds before they could work. Yankner and colleagues found lithium orotate by developing a screening platform that searches a library of compounds for those that might bypass amyloid-beta. Other researchers can now use the platform to seek additional amyloid-evading lithium compounds that might be even more effective.

“One of the most galvanizing findings for us was that there were profound effects at this exquisitely low dose,” Yankner said.

If replicated in further studies, the researchers say lithium screening through routine blood tests may one day offer a way to identify at-risk individuals who would benefit from treatment to prevent or delay Alzheimer’s onset.

Studying lithium levels in people who are resistant to Alzheimer’s as they age might help scientists establish a target level that they could help patients maintain to prevent onset of the disease, Yankner said.

Since lithium has not yet been shown to be safe or effective in protecting against neurodegeneration in humans, Yankner emphasizes that people should not take lithium compounds on their own. But he expressed cautious optimism that lithium orotate or a similar compound will move forward into clinical trials in the near future and could ultimately change the story of Alzheimer’s treatment.

“My hope is that lithium will do something more fundamental than anti-amyloid or anti-tau therapies, not just lessening but reversing cognitive decline and improving patients’ lives,” he said.


This research was supported by the National Institutes of Health.

MIT-Africa launches new collaboration with Angola

The MIT Center for International Studies announced the launch of a new pilot initiative with Angola, to be implemented through its MIT-Africa Program.

The new initiative marks a significant collaboration between MIT-Africa, Sonangol (Angola’s national energy company), and the Instituto Superior Politécnico de Tecnologias e Ciências (ISPTEC). The collaboration was formalized at a signing ceremony on MIT’s campus in June with key stakeholders from all three institutions present, including Diamantino Pedro Azevedo, the Angolan minister of mineral resources, petroleum, and gas, and Sonangol CEO Gaspar Martins.

“This partnership marks a pivotal step in the Angolan government’s commitment to leveraging knowledge as the cornerstone of the country’s economic transformation,” says Azevedo. “By connecting the oil and gas sector with science, innovation, and world-class training, we are equipping future generations to lead Angola into a more technological, sustainable, and globally competitive era.”

The sentiment is shared by the MIT-Africa Program leaders. “This initiative reflects MIT’s deep commitment to fostering meaningful, long-term relationships across the African continent,” says Mai Hassan, faculty director of the MIT-Africa Program. “It supports our mission of advancing knowledge and educating students in ways that are globally informed, and it provides a platform for mutual learning. By working with Angolan partners, we gain new perspectives and opportunities for innovation that benefit both MIT and our collaborators.”

In addition to its new collaboration with MIT-Africa, Sonangol has joined MIT’s Industrial Liaison Program (ILP), breaking new ground as its first corporate member based in sub-Saharan Africa. ILP enables companies worldwide to harness MIT resources to address current challenges and to anticipate future needs. As an ILP member, Sonangol seeks to facilitate collaboration in key sectors such as natural resources and mining, energy, construction, and infrastructure.

The MIT-Africa Program manages a portfolio of research, teaching, and learning initiatives that emphasize two-way value — offering impactful experiences to MIT students and faculty while collaborating closely with institutions and communities across Africa. The new Angola collaboration is aligned with this ethos, and will launch with two core activities during the upcoming academic year:

  1. Global Classroom: An MIT course on geo-spatial technologies for environmental monitoring, taught by an MIT faculty member, will be brought directly to the ISPTEC campus, offering Angolan students and MIT participants a collaborative, in-country learning experience.
  2. Global Teaching Labs: MIT students will travel to ISPTEC to teach science, technology, engineering, arts, and mathematics subjects on renewable energy technologies, engaging Angolan students through hands-on instruction.

“This is not a traditional development project,” says Ari Jacobovits, managing director of MIT-Africa. “This is about building genuine partnerships rooted in academic rigor, innovation, and shared curiosity. The collaboration has been designed from the ground up with our partners at ISPTEC and Sonangol. We’re coming in with a readiness to learn as much as we teach.”

The pilot marks an important first step in establishing a long-term collaboration with Angola. By investing in collaborative education and innovation, the new initiative aims to spark novel approaches to global challenges and strengthen academic institutions on both sides.

These agreements with MIT-Africa and ILP “not only enhance our innovation and technological capabilities, but also create opportunities for sustainable development and operational excellence,” says Martins. “They advance our mission to be a leading force in the African energy sector.”

“The vision behind this initiative is bold,” says Hassan. “It’s about co-creating knowledge and building capacity that lasts.”

© Photo courtesy of the Center for International Studies.

A collaboration between MIT-Africa, Sonangol, and the Instituto Superior Politécnico de Tecnologias e Ciências was formalized at a signing ceremony on MIT’s campus, with key stakeholders from all three institutions present.

School of Architecture and Planning welcomes new faculty for 2025

Four new faculty members join the School of Architecture and Planning (SA+P) this fall, offering the MIT community creativity, knowledge, and scholarship in multidisciplinary roles.

“These individuals add considerable strength and depth to our faculty,” says Hashim Sarkis, dean of the School of Architecture and Planning. “We are excited for the academic vigor they bring to research and teaching.”

Karrie G. Karahalios ’94, MEng ’95, SM ’97, PhD ’04 joins the MIT Media Lab as a full professor of media arts and sciences. Karahalios is a pioneer in the exploration of social media and of how people communicate in environments that are increasingly mediated by algorithms that, as she has written, “shape the world around us.” Her work combines computing, systems, artificial intelligence, anthropology, sociology, psychology, game theory, design, and infrastructure studies. Karahalios’ work has received numerous honors including the National Science Foundation CAREER Award, Alfred P. Sloan Research Fellowship, SIGMOD Best Paper Award, and recognition as an ACM Distinguished Member.

Pat Pataranutaporn SM ’20, PhD ’24 joins the MIT Media Lab as an assistant professor of media arts and sciences. A visionary technologist, scientist, and designer, Pataranutaporn explores the frontier of human-AI interaction, inventing and investigating AI systems that support human thriving. His research focuses on how personalized AI systems can amplify human cognition, from learning and decision-making to self-development, reflection, and well-being. Pataranutaporn will co-direct the Advancing Humans with AI Program.

Mariana Popescu joins the Department of Architecture as an assistant professor with a shared appointment in the MIT Schwarzman College of Computing in the Department of Electrical Engineering and Computer Science. Popescu is a computational architect and structural designer with strong interest and experience in innovative approaches to the fabrication process and the use of materials in construction. Her area of expertise is computational and parametric design, with a focus on digital fabrication and sustainable design. Her extensive involvement in projects related to promoting sustainability has led her to develop a broad set of skills spanning architecture, engineering, computational design, and digital fabrication. Popescu earned her doctorate at ETH Zurich. She was named a “Pioneer” on the MIT Technology Review global list of “35 innovators under 35” in 2019.

Holly Samuelson joins the Department of Architecture as an associate professor in the Building Technology Program at MIT, teaching architectural technology courses. Her teaching and research focus on issues of building design that impact human and environmental health. Her current projects harness advanced building simulation to investigate issues of greenhouse gas emissions, heat vulnerability, and indoor environmental quality while considering the future of buildings in a changing electricity grid. Samuelson has co-authored over 40 peer-reviewed papers, winning a best paper award from the journal Energy and Buildings. As a recognized expert in architectural technology, she has been featured in news outlets including The Washington Post, The Boston Globe, the BBC, and The Wall Street Journal. Samuelson earned her doctor of design from Harvard University Graduate School of Design.

© Images courtesy of the School of Architecture and Planning

New MIT faculty for 2025 in the School of Architecture and Planning (clockwise from top left): Karrie Karahalios, Pat Pataranutaporn, Holly Samuelson, and Mariana Popescu

What your credit score says about how, where you were raised

Jamie Fogel

Jamie Fogel.

Niles Singer/Harvard Staff Photographer

Work & Economy

What your credit score says about how, where you were raised

Study looks at national disparities, finds bill-paying habits emerge by early adulthood, influence upward mobility

Christy DeSmith

Harvard Staff Writer

6 min read

A person’s credit report tells a story about their childhood.

New research, released last month by Harvard’s Opportunity Insights, shows that a strong predictor of an adult’s bill-paying habits — the main determinant of credit scores — is the environment in which they grew up. The study, based on a sample of more than 25 million Americans, reveals lifelong differences in repayment behavior emerging by early adulthood according to race, hometown, and socioeconomic class.

These habits proved surprisingly stubborn as individuals moved up and down the socioeconomic ladder.

“It turns out the credit bureaus are able to learn something about us by age 25 that is extremely persistent,” said co-author Jamie Fogel, a research scientist at Opportunity Insights.

“It turns out the credit bureaus are able to learn something about us by age 25 that is extremely persistent.”

Jamie Fogel

A strong credit score, frequently defined as 661 or higher, is a key tool for economic advancement. It means greater access to loans at lower interest rates for education, cars, homes, or starting businesses.

A solid rating can also open other doors.

“Credit scores are also used to screen job applicants, renters, and even people looking to buy insurance,” Fogel noted. “So lacking a good score can shut down multiple opportunities all at once.”

Fogel and his co-authors set out to take an ambitious, population-wide look at disparities in access to credit and the financial management skills that make affordable borrowing possible. Anonymized records from a major credit bureau were linked with U.S. Census and tax data on roughly 1 percent of U.S. residents.

“We were able to get a representative sample while simultaneously zooming in on particular cohorts,” Fogel explained.

For people born between 1978 and 1985, parental data was also incorporated. “That means we were able to look at people’s parents’ income as well as where they grew up,” Fogel said. “Both turned out to be pretty important.”

Credit bureaus’ scoring algorithms, designed to predict the likelihood of default, are based solely on recent repayment history. The bureaus are legally prohibited from incorporating information on race, age, income, and location. But a growing body of evidence finds that demographic disparities still persist.

OI’s new study, with its big-data approach, yields powerful new insights. By age 25, the researchers found, Americans whose parents were in the lowest 20 percent of earners have an average credit score of 615. Those whose parents were in the top 20 percent averaged 725.

“Your parents’ credit score is extremely predictive of your own repayment,” Fogel noted.

“Your parents’ credit score is extremely predictive of your own repayment.”

Jamie Fogel

Also at 25, Black Americans average credit scores that are nearly 100 points lower than those of white Americans and 140 points lower than those of Asian Americans.

What’s more, these disparities looked “almost identical” at age 65, Fogel said.

Controlling for income by looking only at those from the lowest 25th percentile of parental earnings still revealed a prominent 69-point gap between Black and white individuals. And the average credit score for Black Americans whose parental earnings were in the 90th percentile is similar to that of white Americans from low-income backgrounds.

There are almost certainly racial disparities in job stability, he added. “But we can restrict to people who are continuously employed at the same firm, with not too much income volatility. These gaps persist even then.”
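The kinds of group comparisons described above are straightforward to sketch. A minimal illustration, assuming a pandas DataFrame with hypothetical column names (parent_income_percentile, credit_score_25, race) rather than Opportunity Insights’ actual data or code, might look like this:

    # Illustrative group comparisons (hypothetical columns; not OI's code).
    import pandas as pd

    def score_by_parent_income(df: pd.DataFrame) -> pd.Series:
        """Mean credit score at age 25 by parental income quintile."""
        quintile = pd.qcut(df["parent_income_percentile"], 5, labels=[1, 2, 3, 4, 5])
        return df.groupby(quintile)["credit_score_25"].mean()

    def score_by_race_low_income(df: pd.DataFrame, max_percentile=25) -> pd.Series:
        """Mean score by race among people from the lowest parental-income band."""
        low_income = df[df["parent_income_percentile"] <= max_percentile]
        return low_income.groupby("race")["credit_score_25"].mean()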

Geographic patterns proved equally striking, suggesting that children absorb personal financial lessons from their broader community as well as from parents.

Those from the Upper Midwest, New England, and certain areas of the western U.S. average the highest credit scores and therefore benefit from lower interest rates. People from Appalachia and certain parts of the South have lower scores, with unmet borrowing needs.

A set of more granular analyses revealed hyper-local differences. The country’s highest overall credit scores (an average of 724) were found in Bergen County, New Jersey, just across the Hudson River from New York City. Baltimore, the locale with the country’s lowest scores, averaged nearly 100 points lower.

A separate analysis, focusing exclusively on Americans who grew up in low-income families, confirmed the influence of place on repayment behaviors. In Brooklyn, white Americans from low-income families had the highest average scores (719) whereas individuals with similar backgrounds in the Indianapolis area saw the lowest averages (629).

Also illuminating were patterns observed in people who moved from a place like Brooklyn to a place like Indianapolis, or vice versa. Those who relocated in early childhood appeared more likely to absorb the debt-paying habits of their adopted community. But moving as a teenager meant retaining more influences from the birthplace.

“We don’t know exactly what it is,” Fogel said, “but there really is something you’re getting from your community that has a strong effect on your repayment behavior.”

In the paper, the co-authors also review possible explanations. For example, previous research documents the long-term behavioral effects of historic economic traumas, with the 1921 Tulsa Race Massacre offered as one example.

OI’s data also show that Black Americans and those from low-repayment areas are more likely to float cash to family and friends, with Black Americans also less likely than white Americans to receive assistance from parents. In fact, Black Americans are more likely to be the ones helping their elders.

Correlations with previous OI findings are especially suggestive. The geographic patterns of repayment, newly incorporated into OI’s online Opportunity Atlas, mirror previous work documenting regional and racial variances in access to the American Dream.

“Places that promote repayment are the exact same places that promote upward mobility,” Fogel observed. “We can see these places promoting repayment even when controlling for income.”

The co-authors don’t see an easy fix, noting that the current credit-scoring system understates repayment gaps by race, geography, and class. More accurate measures would likely exacerbate disparities, they wrote.

Instead, the OI team called for more social scientists to examine how race and childhood environment shape financial management skills for life.

“If we want to improve access to credit,” Fogel said, “we really need to understand what’s happening before people’s 25th birthday.”

Professor Emeritus Peter Temin, influential and prolific economic historian, dies at 87

Peter Temin PhD ’64, the MIT Elisha Gray II Professor of Economics, emeritus, passed away on Aug. 4. He was 87. 

Temin was a preeminent economic historian whose work spanned a remarkable range of topics, from the British Industrial Revolution and Roman economic history to the causes of the Great Depression and, later in his career, the decline of the American middle class. He also made important contributions to modernizing the field of economic history through his systematic use of economic theory and data analysis.

“Peter was a dedicated teacher and a wonderful colleague, who could bring economic history to life like few before or since,” says Jonathan Gruber, Ford Professor and chair of the Department of Economics. “As an undergraduate at MIT, I knew Peter as an engaging teacher and UROP [Undergraduate Research Opportunities Program] supervisor. Later, as a faculty member, I knew him as a steady and supportive colleague. A great person to talk to about everything, from research to politics to life at the Cape. Peter was the full package: a great scholar, a great teacher, and a dedicated public goods provider.”

When Temin began his career, the field of economic history was undergoing a reorientation within the profession. Led by giants like Paul Samuelson and Robert Solow, economics had become a more quantitative, mathematically rigorous discipline, and economic historians responded by embracing the new tools of economic theory and data collection. This “new economic history” (today also known as “cliometrics”) revolutionized the field by introducing statistical analysis and mathematical modeling to the study of the past. Temin was a pioneer of this new approach, using econometrics to reexamine key historical events and demonstrate how data analysis could lead to the overturning of long-held assumptions.

A prolific scholar who authored 17 books and edited six, Temin made important contributions to an incredibly diverse set of topics. “As kindly as he was brilliant, Peter was a unique type of academic,” says Harvard University Professor Claudia Goldin, a fellow economic historian and winner of the 2023 Nobel Prize in economic sciences. “He was a macroeconomist and an economic historian who later worked on today’s social problems. In between, he studied antitrust, health care, and the Roman economy.”

Temin’s earliest work focused on American industrial development during the 19th century and honed the signature approach that quickly made him a leading economic historian — combining rigorous economic theory with a deep understanding of historical context to reexamine the past. Temin was known for his extensive analysis of the Great Depression, which often challenged prevailing wisdom. By arguing that factors beyond monetary policy — including the gold standard and a decline in consumer spending — were critical drivers of the crisis, Temin helped recast how economists think about the catastrophe and the role of monetary policy in economic downturns.

As his career progressed, Temin’s work increasingly expanded to include the economic history of other regions and periods. His later work on the Great Depression placed a greater emphasis on the international context of the crisis, and he made significant contributions to our understanding of the drivers of the British Industrial Revolution and the nature of the Roman economy.

“Peter Temin was a giant in the field of economic history, with work touching every aspect of the field and original ideas backed by careful research,” says Daron Acemoglu, Institute Professor and recipient of the 2024 Nobel Prize in economics. “He challenged the modern view of the Industrial Revolution that emphasized technological changes in a few industries, pointing instead to a broader transformation of the British economy. He took on the famous historian of the ancient world, Moses Finley, arguing that slavery notwithstanding, markets in the Roman economy — especially land markets — worked. Peter’s influence and contributions have been long-lasting and will continue to be so.”

Temin was born in Philadelphia in 1937. His parents were activists who emphasized social responsibility, and his older brother, Howard, became a geneticist and virologist who shared the 1975 Nobel Prize in medicine. Temin received his BA from Swarthmore College in 1959 and went on to earn his PhD in Economics from MIT in 1964. He was a junior fellow of Harvard University’s Society of Fellows from 1962 to 1965.

Temin started his career as an assistant professor of industrial history at the MIT Sloan School of Management before being hired by the Department of Economics in 1967. He served as department chair from 1990 to 1993 and held the Elisha Gray II professorship from 1993 to 2009. Temin won a Guggenheim Fellowship in 2001, and served as president of the Economic History Association (1995-96) and the Eastern Economic Association (2001-02).

At MIT, Temin’s scholarly achievements were matched by a deep commitment to engaging students as a teacher and advisor. “As a researcher, Peter was able to zero in on the key questions around a topic and find answers where others had been flailing,” says Christina Romer, chair of the Council of Economic Advisers under President Obama and a former student and advisee. “As a teacher, he managed to draw sleepy students into a rousing discussion that made us think we had figured out the material on our own, when, in fact, he had been masterfully guiding us. And as a mentor, he was unfailingly supportive and generous with both his time and his vast knowledge of economic history. I feel blessed to have been one of his students.”

When he became the economics department head in 1990, Temin prioritized hiring newly minted PhDs and other junior faculty. This foresight continues to pay dividends — his junior hires included Daron Acemoglu and Abhijit Banerjee, and he launched the recruiting of Bengt Holmström for a senior faculty position. All three went on to win Nobel Prizes and have been pillars of economics research and education at MIT.

Temin remained an active researcher and author after his retirement in 2009. Much of his later work turned toward the contemporary American economy and its deep-seated divisions. In his influential 2017 book, “The Vanishing Middle Class: Prejudice and Power in a Dual Economy,” he argued that the United States had become a “dual economy,” with a prosperous finance, technology, and electronics sector on one hand and, on the other, a low-wage sector characterized by stagnant opportunity.

“There are echoes of Temin’s later writings in current department initiatives, such as the Stone Center on Inequality and Shaping the Future of Work,” notes Gruber. “Temin was in many ways ahead of the curve in treating inequality as an issue of central importance for our discipline.”

In “The Vanishing Middle Class,” Temin also explored the role that historical events, particularly the legacy of slavery and its aftermath, played in creating and perpetuating economic divides. He further explored these themes in his last book, “Never Together: The Economic History of a Segregated America,” published in 2022. While Temin was perhaps best known for his work applying modern economic tools to the past, this later work showed that he was no less adept at the inverse: using historical analysis to shed light on modern economic problems.

Temin was active with MIT Hillel throughout his career, and outside the Institute, he enjoyed staying active. He could often be seen walking or biking to MIT, and taking a walk around Jamaica Pond was a favorite activity in his last few months of life. Peter and his late wife Charlotte were also avid travelers and art collectors. He was a wonderful husband, father, and grandfather, who was deeply devoted to his family.

Temin is lovingly remembered by his daughter Elizabeth “Liz” Temin and three grandsons, Colin and Zachary Gibbons and Elijah Mendez. He was preceded in death by his wife, Charlotte Temin, a psychologist and educator, and his daughter, Melanie Temin Mendez.

Peter Temin was a prolific scholar who authored 17 books and edited six.

Cicadas sing in perfect sync with pre-dawn light

Close-up of a cicada

In a study published in the journal Physical Review E, researchers found that these insects begin their loud daily serenades when the sun is precisely 3.8 degrees below the horizon: a consistent level of early morning light that falls within the period known as civil twilight.

The research, carried out by scientists from India, the UK and Israel, analysed several weeks of field recordings taken at two locations near Bangalore in India. Using tools from physics typically applied to the study of phase transitions in materials, the team uncovered a regularity in how cicadas respond to subtle changes in light.

“We’ve long known that animals respond to sunrise and seasonal light changes,” said co-author Professor Raymond Goldstein, from Cambridge’s Department of Applied Mathematics and Theoretical Physics. “But this is the first time we’ve been able to quantify how precisely cicadas tune in to a very specific light intensity — and it’s astonishing.”

The crescendo of cicada song — familiar to anyone who has woken up early on a spring or summer morning — takes only about 60 seconds to build, the researchers found. Each day, the midpoint of that build-up occurs at nearly the same solar angle, regardless of the exact time of sunrise.

In practical terms, that means cicadas begin singing when the light on the ground has reached a specific threshold, varying by just 25% during that brief transition.

To explain this level of precision, the team developed a mathematical model inspired by magnetic materials, in which individual units, or spins, align with an external field and with each other. Similarly, their model proposes that cicadas make decisions based both on ambient light and the sounds of nearby insects, like individuals in an audience who start clapping when others do.
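The published equations are not reproduced here, but a minimal toy simulation, with assumed parameters and a simple logistic decision rule rather than the authors' model, illustrates the mechanism: each silent cicada starts singing with a probability that rises with the ambient light (the external field) and with the fraction of cicadas already singing (the coupling), producing a sharp, synchronized onset once the light nears a threshold.

```python
# Toy illustration with assumed parameters, not the published model: each
# silent cicada starts singing with a probability that grows with the ambient
# light level (the "external field") and with the fraction of cicadas already
# singing (the "coupling"), so the chorus switches on sharply near a light
# threshold rather than building gradually.
import numpy as np

rng = np.random.default_rng(0)
n_cicadas = 1000
singing = np.zeros(n_cicadas, dtype=bool)

light_threshold = 0.5   # assumed relative light level for the onset
field_strength = 40.0   # assumed sensitivity to light
coupling = 6.0          # assumed sensitivity to neighbours already singing

for step in range(200):
    light = step / 200.0                    # light rises steadily before dawn
    frac_singing = singing.mean()
    # Logistic decision combining light (field) and peers (coupling).
    drive = field_strength * (light - light_threshold) + coupling * frac_singing
    p_start = 1.0 / (1.0 + np.exp(-drive))
    # Silent cicadas may start singing; singers keep going once started.
    singing |= rng.random(n_cicadas) < p_start
    if step % 20 == 0:
        print(f"light={light:.2f}  fraction singing={frac_singing:.2f}")
```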

“This kind of collective decision-making shows how local interactions between individuals can produce surprisingly coordinated group behaviour,” said co-author Professor Nir Gov from the Weizmann Institute, who is currently on sabbatical in Cambridge.

The field recordings were made by Bangalore-based engineer Rakesh Khanna, who carries out cicada research as a passion project. Khanna collaborated with Goldstein and Dr Adriana Pesci at Cambridge’s Department of Applied Mathematics and Theoretical Physics.

“Rakesh’s observations have paved the way to a quantitative understanding of this fascinating type of collective behaviour,” said Goldstein. “There’s still much to learn, but this study offers key insights into how groups make decisions based on shared environmental cues.”

The study was partly supported by the Complex Systems Fund at the University of Cambridge. Raymond Goldstein is the Alan Turing Professor of Complex Physical Systems and a Fellow of Churchill College, Cambridge.

Reference:
Khanna, RA, Goldstein, RE, Pesci, AI, and Gov, NS. ‘Photometric Decision-Making During the Dawn Choruses of Cicadas.’ Physical Review E (2025). DOI: 10.1103/4y4d-p32q

Cicadas coordinate their early morning choruses with remarkable precision, timing their singing to a specific level of light during the pre-dawn hours.

Annual cicada


NHS Active 10 walking tracker users are more active after using the app

People walking beside a graffitied wall

In a study published in npj Digital Medicine, researchers found that while activity levels slowly declined over time, even after 30 months users who were still using the app were more active than they had been before downloading it.

Lack of physical activity is linked to poor health, including higher rates of heart disease and stroke, type 2 diabetes, cancers, dementia, depression and early death. Almost 4 million premature deaths per year – and healthcare costs of US$27 billion (more than £20 billion) – are attributable to physical inactivity.

In England, more than one in three (37%) adults do not reach the recommended 150 minutes per week of moderate-intensity activity – which can include brisk walking – and around one in four (26%) adults does less than 30 minutes per week.

Recently, mobile health apps have grown in popularity, allowing users to track their physical activity, offering tailored feedback, goal setting opportunities and activity reminders throughout the day. One such app is NHS Active 10, launched in 2017 to increase brisk walking levels, as walking is the most common form of activity reported by English adults. The app has been downloaded over 1.5 million times since its introduction.

In the first formal evaluation of its effectiveness, researchers from the University of Cambridge examined anonymised data from more than 200,000 users of the app – those who used the app for at least a month – collected between July 2021 and January 2024. These users had agreed for their anonymised data to be collected and shared for research purposes.

Three quarters of those users who provided demographic information were women, and the average age of users was 51 years. One in three users (32%) was aged 60 years or over.

Following download, the app requested permission from users to access their historical walking data. This revealed that prior to using the app, individuals spent on average 12.3 minutes per day in brisk walking and 30.4 minutes per day in non-brisk walking.

On the first day the app was downloaded, users walked on average an additional 9.0 minutes per day briskly. Their non-brisk walking increased by 2.6 minutes per day.

Over time, the amount of brisk walking done by users declined, falling on average 0.15 minutes per day for each month that passed. The amount of non-brisk walking also fell, by 0.06 minutes per day for each month that passed.

Over a third of users (35%) were still using the app after six months and a fifth (21%) after a year. This is much higher than the average for health and fitness apps worldwide, where typically fewer than three in 100 users (2.8%) are still using the app after 30 days.

At the end of 30 months, users were still walking an average of 4.5 minutes more per day briskly and 0.8 minutes per day more non-briskly than before they began using the app.
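Taken at face value, these figures are mutually consistent with a simple linear decline; the back-of-envelope check below uses only the numbers reported above and assumes the decline is exactly linear.

```python
# Back-of-envelope consistency check using only the figures quoted above:
# a day-one gain that declines linearly each month should land close to the
# reported values at 30 months.
initial_brisk_gain = 9.0       # extra brisk minutes per day on day one
initial_nonbrisk_gain = 2.6    # extra non-brisk minutes per day on day one
brisk_decline_per_month = 0.15
nonbrisk_decline_per_month = 0.06
months = 30

brisk_after = initial_brisk_gain - brisk_decline_per_month * months
nonbrisk_after = initial_nonbrisk_gain - nonbrisk_decline_per_month * months
print(f"brisk walking:     {brisk_after:.1f} extra minutes per day")     # 4.5
print(f"non-brisk walking: {nonbrisk_after:.1f} extra minutes per day")  # 0.8
```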

First author Dr Dharani Yerrakalva, from the Department of Public Health and Primary Care at the University of Cambridge, said: “Even though activity levels fell over time, people still using the app after more than two years were doing more physical activity than before they started using it.

“At the population level, other research has suggested that we would see significant health benefits from even modest increases in activity such as this. Previous work by colleagues at Cambridge suggests that just 11 minutes a day of brisk walking could prevent one in 10 premature deaths.”

Senior author Professor Simon Griffin, from the Department of Public Health and Primary Care and the Medical Research Council Epidemiology Unit at the University of Cambridge, said: “Active 10 appears to have been a success, in that it encouraged 200,000 people to increase their levels of moderate physical activity. We should now consider whether apps such as this can be integrated into NHS practice, for example providing data to GPs so they can monitor their patients’ progress and provide tailored advice, to help us move towards a more personalised approach to medicine.”

Simon Willcock, aged 71, said: “I am a big fan of Active 10. Following a successful cardiac ablation in January 2023, I set out to get fit and start looking after my heart. Active 10 has enabled me to change my behaviour especially when walking my two dogs twice a day. I now consciously walk faster and rarely amble.

“I set myself a minimum of three Active 10s a day and usually manage four or five. My recorded ‘brisk minutes’ usually average 40 to 55% of my total walking. I have lost over a stone [6kg], feel fitter and am rarely out of breath when walking the Surrey Hills near where I live. Much cheaper than going to the gym!”

The research was funded by the National Institute for Health and Care Research (NIHR) and Medical Research Council, with support from the NIHR Cambridge Biomedical Research Centre.

Reference

Yerrakalva, D et al. Evaluation of the NHS Active 10 Walking App Intervention through time-series analysis in 201,688 individuals. npj Digital Medicine; 6 Aug 2025; DOI: 10.1038/s41746-025-01785-x

Users of the NHS Active 10 app, designed to encourage people to become more active, immediately increased their amount of brisk and non-brisk walking upon using the app, according to researchers from the University of Cambridge.

“It's right there in my face if I've been lazy!” – Sonali Shukla

Sonali Shukla is a careers consultant at the University of Cambridge. Living in Cambridge, she was used to cycling to work, but when her daughter was born, she found that a combination of looking after her and the recent Covid lockdowns meant she had become less active.

“I started using the NHS Active 10 app around six months after my daughter was born,” she says. “I was looking for ways to get a bit more active. I was intrigued because I've used the step counter on my phone, but what was interesting about this one is that it tracks your brisk walking.”

Sonali initially downloaded the app out of curiosity to see whether or not she walked briskly, but then found herself hooked, motivated by the trophies and celebrations it gave when she completed 10 minutes of brisk walking.

She found the results illuminating as it highlighted the impact her daughter had on her physical activity levels, even when she thought she was getting enough exercise. “I might go for an hour long walk, but when I've got small children in tow, it's too leisurely to really count as proper exercise.”

Even now, three years later, she still uses the app. “The version that I have on my phone has a little tracker that you don't have to log into the app to see. It tracks your brisk walking on the face of your phone. So it's right there in my face if I've been lazy!”

Sonali has managed to keep active, and although the app isn’t the only reason why, she says it certainly helps.

“When the weather's bad and it's not as easy to just go for a walk, I might notice that it's been a couple of days before I've really moved. It encourages me to go outside and get moving.”


Foundation for U.S. breakthroughs feels shakier to researchers

Nation & World

Foundation for U.S. breakthroughs feels shakier to researchers

A scientist uses a microscope in a lab.

Max Larkin

Harvard Staff Writer

6 min read

Funding cuts seen as threat to nation’s status as driver of scientific progress

With each dollar of its grants, the National Institutes of Health — the world’s largest funder of biomedical research — generates, on average, $2.56 worth of economic activity across all 50 states.

The awards yield new drugs, like the naloxone spray used to prevent opioid overdoses, and breakthroughs in basic science, like the link between cholesterol and heart health.

But NIH grants also support more than 400,000 U.S. jobs, and have been a central force in establishing the country’s dominance in medical research. A recent survey by Nature found that, in health sciences, American research output is larger than that of the next 10 leading countries combined.

And that’s in large part due to federal government support of research conducted by universities. According to data from the Organisation for Economic Co-operation and Development, over the course of the past three decades, those universities have become, as a sector, the largest hub of nonbusiness research in the world.

bar-chart-race visualization

Waves of grant terminations under the second Trump administration have thrown that relationship into doubt — and posed particular threats to certain kinds of research. Harvard has challenged the terminations in federal court. And in July, University officials confirmed they would provide 80 percent of expected expenses so that most defunded research inside the University can continue temporarily.

But that doesn’t protect researchers from the anxiety that comes with what could be a life-altering jolt. Another concern is lost time. Most of the affected grants support projects that touch many human lives. Disruptions have consequences.

Walter Willett, the Fredrick John Stare Professor of Epidemiology and Nutrition and — by one count — Harvard’s single most-cited scholar, worries about the maintenance of biobanks whose samples can date back 45 years.

The longitudinal studies behind these samples, including research conducted at Harvard and in Washington, generate insights by following populations over long periods of time. So an ill-timed loss of funding can leave an irremediable gap in the dataset or a question mark in place of a finding.

As his grants dried up in May, Willett and his team started “scrambling to try to protect the samples and the data we have”: freezers full of blood samples, DNA, and other biological material. Willett confirmed that those samples are safe this summer, thanks to the University’s stopgap funding. “But we still don’t have long-term solutions,” he added.

Of her four canceled grants, Molly Franke, an epidemiologist and professor of global health and social medicine at Harvard Medical School, worried most about a five-year randomized trial following roughly 160 teens and young adults living with HIV in Peru. The study tests a community-based support intervention that includes mental health support and healthcare liaisons who help them sign up for insurance, get government IDs, and enter treatment.

After the grant was canceled, that network of support was at risk of disappearing. “It was devastating,” she said. “These young people are often in very precarious social situations: Sometimes they don’t have adults in their lives; they’re struggling with mental health issues, substance abuse, or extreme poverty.”

Once University administrators committed to maintaining funding on a temporary basis, researchers breathed a small sigh of relief.

But Franke will still have to look for other backers to make sure the Peru study can be brought to a satisfactory conclusion. Her team tries to lighten the toll of disease in far-flung places because they believe “it’s the right thing to do,” she said. But the work is far from irrelevant to Americans, she noted.

“Infectious diseases know no borders,” Franke said. “And when we get drug-resistant tuberculosis in this country, we know how to treat it because of studies conducted elsewhere.”

sankey visualization

In the spring of 2024, Kelsey Tyssowski — a research associate in organismic and evolutionary biology — received a grant of $130,255 through the NIH’s BRAIN Initiative for her work on the nervous systems of deer mice, in the hopes that it might shed light on ALS and other neurodegenerative disorders.  (That may sound like a stretch, Tyssowski acknowledged, before pointing out that “skilled movement is the thing that people lose first with a lot of diseases.”)

But, as with nearly all other government grants to Harvard, those funds were finally revoked in early May.

Across 15 years in labs, Tyssowski said she’s been funded by government money “more often than not.” Her latest grant was supposed to serve as a bridge between her postdoc in the lab of Hopi Hoekstra and a tenure-track job, and a dedicated lab, probably on another campus.

“I may be the only person studying skilled movement, from this angle, right?” she said with a laugh. “I’d like to start my own lab, and train other people to do this. And if I can’t do that, all of the money and time and energy that’s gone into getting me to this point will have been almost completely wasted.”

Similar stories are playing out across Greater Boston and elsewhere in the nation’s research hubs. Grant data from the NIH shows that affected researchers at Harvard were working across a variety of medical frontiers, from cancer immunotherapy and stem cells to environmental health.

But researchers also stress that their work is not limited to labs on campus or in local hospitals.

At Harvard Medical School, the termination of 350 grants — totaling $230 million in annual funding — has also entailed the cancellation of over 100 “sub-awards.” Those are funds that pass through to partner institutions — in Harvard’s case, in 23 states and Washington, D.C. — that might have better access to animal species or lab resources.

Jonathan Abraham, associate professor of microbiology at HMS, won a grant to analyze mosquitoes en route to a better understanding of Eastern equine encephalitis, or EEE. The grant came with a sub-award for the University of Texas Medical Branch, home to the world’s largest depository of insect-borne viruses.

Meanwhile, Stephanie Mohr won a similar sub-award for a team at the University of Maryland School of Medicine, for a study of tick biology that hoped to shed light on Lyme disease. They were just a few months into a five-year grant when the termination hit.

The same goes for Franke’s study of HIV in youth, which involved a sub-award to the Peruvian branch of Partners In Health.

That study involved, she said, a commitment not just to patients but to the staff paid to care for them and to Peru’s Ministry of Health. The collapse of one grant had ripples of risk, even thousands of miles away.

“It affects the care, people’s livelihoods … and a trust that had taken 20 years to build,” Franke said. “That was what kept me up at night.”

New carbon material sharpens proton beams, potentially boosting cancer treatment precision

Researchers from the National University of Singapore (NUS) have developed a groundbreaking carbon membrane that could revolutionise proton therapy for cancer patients, and advance technologies in medicine and other areas such as energy devices and flexible electronics.

The new carbon material, which is just a single atom thick, shows incredible promise in enabling high-precision proton beams. Such beams are key to safer and more accurate proton therapy for cancer treatment. The new material, called ultra-clean monolayer amorphous carbon (UC-MAC), could outperform best-in-class materials such as graphene and commercial carbon films.

The research was led by Associate Professor Lu Jiong and his team from the NUS Department of Chemistry, in collaboration with international partners.

Beyond graphene: A new class of 2D carbon

Unlike graphene, which features a perfectly ordered honeycomb structure of hexagonal rings, UC-MAC is made up of a complex mix of five-, six-, and seven-membered carbon rings arranged in a disordered, ultra-thin sheet.

This atomic-level disorder is an advantage, as it gives rise to angstrom-scale pores (just one ten-billionth of a metre wide) that can be finely tuned to control the behaviour of tiny particles like protons and molecular hydrogen ions (H2+) as they pass through. The material’s porous and ultra-thin nature makes it ideal for filtering and splitting subatomic particles, a critical need in several high-tech applications.

Faster, cleaner, and scalable manufacturing

One of the biggest hurdles in using this material for real-world applications is the challenge of manufacturing it. Existing methods are slow, costly, and often introduce metal impurities that compromise the material’s performance.

To solve this, the researchers developed a new industry-compatible synthesis process called the disorder-to-disorder (DTD) approach. Using a special type of plasma-enhanced chemical vapor deposition (ICP-CVD), they were able to grow an 8-inch UC-MAC sheet in seconds, much faster than previous methods, without any detectable metal contamination.

This is a major step forward in scaling up the production of this advanced material for industrial and medical use.

This achievement was made possible through close collaboration between synthetic chemists, materials scientists, and theoretical physicists, including Professor Zeng Xiao Cheng from City University of Hong Kong, Assistant Professor Zhao Xiaoxu from Peking University, Associate Professor Thomas Osipowicz from NUS Department of Physics, and other contributing authors.

The research breakthrough was published in the scientific journal Nature Nanotechnology on 28 July 2025.

Sharper proton beams for safer and more effective treatment

When used as a membrane to split molecular hydrogen ions (H2+) into individual protons, UC-MAC produced proton beams that were significantly sharper than those generated using graphene or traditional carbon films. In fact, the new material reduced unwanted proton scattering events by roughly a factor of two compared with graphene and a factor of 40 compared with commercial carbon thin films.

This is especially important for non-invasive cancer treatments such as proton therapy, where focused beams are used to target and destroy tumours while sparing healthy tissue. Thinner membranes with minimal scattering could help clinicians better control the beam current and direction, making treatment safer and more effective.

A versatile platform for future technologies

While the immediate spotlight is on proton therapy, UC-MAC has potential far beyond medicine. Its ultra-clean, porous structure could be useful for many applications including energy devices such as fuel cells and batteries, catalysis where precise molecular separation is key, and flexible electronics.

“The semiconducting properties of UC-MAC films also make them promising candidates for ultra-thin electronics, particularly for sub-2 nm integrated circuits—a critical frontier in the post-Moore’s law era,” said Assoc Prof Lu.

A step toward real-world use

By demonstrating a fast, scalable, and clean method to produce UC-MAC, the research team has paved the way for transitioning this powerful new material from the lab to real-world applications.

This advancement is not just a step forward in materials science, but a leap towards practical, life-saving technologies that are thinner, faster, and more precise than ever.

Innovation must remain human-centred in the age of generative AI: WIPO chief at NUS Law conference

In a landmark gathering of leading legal scholars, policymakers, and industry experts from around the world, World Intellectual Property Organization (WIPO) Director General Mr Daren Tang delivered a keynote address calling for human creativity to remain at the core as intellectual property (IP) laws evolve and adapt to fast-evolving technologies.

“Generative AI has evolved quickly yet remains a skilful replicator, lacking the real spark of originality and inventiveness that characterises human innovation and creativity. We should therefore see Gen AI as a tool, and like any tool, ensure that it is used for good,” said Mr Tang. “Ingenuity, invention and creativity is a fundamental part of who we are as a human species, and technology, as well as the IP system, must continue to protect, nurture and support this, never forgetting to put the human creator at its centre.”

Mr Tang’s call set the tone for the two-day “Intellectual Property and Technology in the 21st Century” conference, held on 4 and 5 Aug 2025, where more than 100 participants from over a dozen countries, representing academia, government agencies, industry and the legal profession, engaged in deep discussions on pressing IP challenges arising from rapid technological advancement.

Organised by the Centre for Technology, Robotics, Artificial Intelligence & the Law (TRAIL) and the EW Barker Centre for Law & Business at NUS Faculty of Law (NUS Law), the conference is co-hosted with law schools from Columbia, Oxford, and Tsinghua, marking the first-ever academic collaboration of its kind across these leading global institutions. The conference is also supported by partners such as Google, Bytedance, the Singapore Academy of Law and the Intellectual Property Office of Singapore (IPOS).

Keeping pace with AI, creativity and global competition

The challenges facing IP law in the age of AI are far-reaching, going beyond legal doctrine into sectors as diverse as technology, entertainment, and fashion, prompting a rethink of how businesses innovate and compete, to how creators, artists, and designers protect their work.

Mr Adam Williams, Chief Executive of the UK Intellectual Property Office (UK IPO), said: “IP rights give creators, inventors, and investors the confidence to turn their ideas into reality, realise new opportunities and adapt to challenges. Ongoing dialogue between IP offices, and with industry and practitioners, is key to ensuring global IP frameworks remain fit for the future to encourage new discoveries and creations to thrive.”

In July this year, the UK IPO launched a public consultation on the proposed Standard Essential Patents measures to support the UK’s technology-driven economic growth.

"AI is a great economic opportunity but a key issue is its potential to disrupt the livelihoods of many," said Mr Tan Kong Hwee, Chief Executive of IPOS. "Governments, enterprises, and society must work in tandem to find the right balance that adopts a human-centric approach to ensure protection for IP owners' rights whilst facilitating innovation in a responsible and ethical way. At the end of the day, we must remember that people are at the heart of it all; that is why IPOS has committed to helping creators and innovators understand their IP rights as they navigate this fast-evolving technology."

Mr Williams and Mr Tan were part of a roundtable amongst heads of IP offices along with Mr Tang, following his opening keynote address at the conference. The roundtable was moderated by TRAIL Co-Director, Professor David Tan from NUS Law.

Prof Tan, a pioneer and expert in entertainment law and fashion law, said that while policymakers are working to reform IP protection, creative industries have turned current limitations into opportunities. The global fashion industry, including Singapore’s, is a prime example of IP’s ‘negative space’, in which creation and innovation can thrive without significant protection from intellectual property law.

He said, “In Singapore, trademark, patent, and design laws give strong protection for logos, inventions, and product designs. But when it comes to copyright, especially in fashion, the protection is weaker. ‘Knockoffs’ often copy the look and feel of an original without directly copying logos or breaking the law. With the rise of social media, the internet, and AI tools, more people now have the ability to remix and build on existing designs. As a result, success in the industry is less about having exclusive rights, and more about setting trends, building a strong brand, and earning customer loyalty.”

Highlighting opportunities, Prof Tan echoed sentiments recently shared by Prime Minister Lawrence Wong that Singapore can get ahead of new technology like AI to create new jobs[1].

“More designers can use generative AI to help them more quickly create 2D and 3D designs from packaging to clothing and furniture, and small businesses and budding entrepreneurs in particular can get their products to market a lot faster,” he said.

Rethinking Responsibility and Creativity in the Age of Generative AI

At the conference, over thirty presentations by experts in various fields of practice and research unpacked how emerging technologies continue to raise urgent questions that affect not just legal systems, but the public’s rights, safety, and creative freedom.

In recent years, powerful AI tools that can create text, images, or music have made courts and lawmakers think hard about some big copyright questions—like whether something made by AI can be protected by copyright if there wasn’t enough human effort involved.

Associate Professor He Tianxiang from City University of Hong Kong said, “It is the mind of the human creator, fallible and inspired, that copyright law was built to protect and incentivise. The courts must remain clear-eyed and perhaps even sceptical when presented with AI-generated content, whether text, image or music, cloaked with a thin veneer of human input.”

Prof He says the burden should be on the claimant to prove their authorship, not on the public to disprove it.

“Generative AI challenges us to reaffirm what copyright is meant to protect: not merely the existence of a text or image, but the fact that a human mind originated it,” he added. “It pushes us to clarify that the law’s protection is awarded to the act of human creativity, however small or large, and not to the mere act of generating content.”

Another emerging legal challenge is ‘artificial causation’, which is having to figure out who is responsible when AI creates something that causes a legal problem.

Professor Shyamkrishna Balganesh from Columbia Law School said, “When prompted by a human actor, a generative AI application uses the patterns that it learned from voluminous data to generate an output that is seemingly responsive to the prompt and largely simulates a likely human response. However, this output may infringe copyright, contain falsities that are defamatory, or violate another individual’s privacy.”

The question then is who or what is responsible for the output: The person who used the tool, the AI system itself, or someone else? Prof Balganesh argued that solving the puzzle of artificial causation in the law is crucial not just for the legal regulation of generative AI, but also for the very working of multiple areas of law where the inquiry remains human-focused.

WIPO will be launching the AI Infrastructure Interchange, known as AIII or “A-triple-I” in December 2025. The AIII will be a dedicated space for exploring technical and operational questions about copyright infrastructure in the age of AI, as well as a neutral forum where creators, rights-holders, developers and experts can exchange ideas and explore practical solutions.


[1] https://www.straitstimes.com/singapore/spore-can-and-must-meaningfully-apply-tech-like-ai-in-a-way-that-creates-jobs-for-locals-pm-wong

New Google-NUS partnership to advance applied AI research and talent development in Singapore

The National University of Singapore (NUS) and Google are embarking on a new strategic collaboration to accelerate applied AI research and nurture skilled AI practitioners. This collaboration reinforces Singapore’s ambition to be a global hub for AI innovation and talent, advances the national digital transformation agenda, and deepens industry-academia partnerships to strengthen the country’s research ecosystem.

NUS and Google exchanged strategic collaboration agreements to establish a joint research and innovation centre during the NUS School of Computing’s (NUS Computing) 50th Anniversary Gala Dinner, where the School commemorated five decades of sterling contributions to computing education and research.

The exchange of signed agreements between Professor Tulika Mitra, Dean of NUS Computing, and Ms Serene Sia, Country Director, Singapore and Malaysia, Google Cloud, was witnessed by Guest-of-Honour, Mr Tan Kiat How, Senior Minister of State, Ministry of Digital Development and Information, together with Professor Tan Eng Chye, NUS President, and Ms Yolyn Ang, Vice President, Knowledge and Information Partnerships, Asia Pacific, Google.

Professor Liu Bin, NUS Deputy President (Research and Technology), said, “Google has been a valued long-term partner of NUS, and we are excited to deepen this strategic relationship. The joint centre brings together NUS’ leadership in AI and multidisciplinary research and Google’s deep research expertise, advanced technologies and tools, as well as well-established pathways for research translation and deployment.

A key pillar of this partnership is talent development — through endowed professorships, mentorship, training programmes, and hands-on research projects. We are grateful to Google for their support to establish a professorship at NUS, which will strengthen faculty leadership and research excellence in AI. We are confident that our joint efforts with Google will nurture the next generation of AI scientists, engineers, and innovators equipped to tackle real-world challenges. Together, we are well-positioned to drive AI breakthroughs that will transform lives, reshape industries, and advance the future of education, healthcare, and beyond.”

Serene Sia, Country Director, Singapore and Malaysia, Google Cloud, said, “Google and NUS share a longstanding partnership, anchored on talent development and applying frontier technologies for public good. These include an on-campus Google Developer Group to equip students with advanced software skills; Google Cloud as a pioneering industry partner of the NUS AI Institute; cultivating talent to tackle biomedical challenges with AI; producing the world’s first AI-powered legal journal podcast with NotebookLM, and a Google PhD Fellowship programme to recognise exceptional work in computer science. Our new collaboration truly builds on those successes; it's a significant step forward in Google’s commitment to bringing new capabilities for scientific discovery to Singapore. NUS has consistently been at the forefront of Singapore's RIE ecosystem, supporting Singapore’s transformation into a knowledge-based, innovation-driven economy and society. By combining NUS’ world-class multidisciplinary research capabilities with Google’s world-class AI research and AI-optimised cloud infrastructure, this joint centre is poised to steer safe and responsible AI development and accelerate scientific progress that transforms public health, learning experiences, and other vital fields."

Joint Research and Innovation Centre

NUS and Google plan to establish a joint research and innovation centre, bringing together resources and technology to pursue experimental or applied AI projects across diverse domains.

There are also plans for a rapid prototyping sandbox, to be established and governed by the joint centre. The sandbox will provide a controlled, flexible cloud-based environment, supported by Google Cloud’s power-efficient Tensor Processing Units (TPUs), for experimenting with, testing, and validating the solutions developed in each domain before they are deployed or scaled in real-world settings.

Examples of these domains include:

●      AI in Education: Research aimed at utilising Google Cloud’s Vertex AI platform to develop and evaluate AI-driven tools for adult education, including adaptive scaffolding, blended learning, context-aware nudging, course-specific AI companions, and psychometrically robust measurement of learner engagement, skill acquisition, and career progression. This supports the national emphasis on lifelong learning and continuous upskilling of Singaporeans for the evolving demands of various industries.

●      AI in Legal: The development of a Singapore Law-specific LLM on Google Cloud by NUS Faculty of Law, NUS AI Institute, and NUS Computing. This domain-specific LLM will be grounded on local statutory interpretation, case precedents, and legal nuances to overcome the limitations of generic LLMs in legal contexts. This project aims to provide the core technology for an AI assistant supporting legal research, with potential to enhance productivity and access to contextually-relevant legal information across law firms, the judiciary, and public legal education.

●      AI in Public Health (AI4PH): Research that aims to utilise AI to drive population-level health outcomes, by integrating foundational models with diverse data sources across healthcare, social services, and environmental systems. This will enable the development of agentic AI solutions to support preventive care under programmes like Healthier SG and promote cognitive health and active ageing.

Talent Development Programme and Google-Supported Professorship in AI

Complementing the joint centre, Google plans to establish an AI-focused talent development programme at NUS. This initiative aims to provide training opportunities and certification pathways in Google Cloud AI platforms and tools for NUS students and researchers to accelerate their applied AI research projects.

Additionally, Google intends to establish a Google-supported professorship at NUS to further promote faculty leadership in AI-related fields, foster even deeper collaboration between academia and industry, and contribute to cultivating the next generation of talent in AI and digital innovation.

How MIT LGO alumni are powering Amazon’s global operations

If you’ve urgently ordered a package from Amazon — and exhaled when it arrived on your doorstep hours later — you likely have three graduates of the MIT Leaders for Global Operations (LGO) program to thank: John Tagawa SM ’99; Diego Méndez de la Luz MNG ’04, MBA ’11, SM ’11; or Chuck Cummings MBA ’14, SM ’14.

Each holds a critical role within the company. Tagawa oversees Amazon’s North American operations. Méndez de la Luz heads up operations in Mexico. Cummings leads customer fulfillment throughout Canada. They also mentor LGO students and recent graduates throughout the organization and credit LGO’s singular blend of operational and leadership strength for their success as Amazon grows.

John Tagawa

Tagawa came to Amazon — now the world’s largest online retailer — through an LGO alumni connection in 2008, joining the organization during rapid expansion. He led fulfillment centers on the West Coast and went on to oversee operations in India, South America, and Europe, with a focus on safety, speed, and efficiency.

Today, he’s a resource for other LGO graduates at Amazon, applauding the program’s uniquely multidimensional focus on tech, engineering, and leadership, all of which are key pillars as the organization continues to grow.

“Today, we have hundreds of fulfillment centers worldwide, and Amazon has grown its transportation and last-mile delivery network in an effort to ensure greater resilience and speed in getting products to customers,” he explains.

Tagawa says that LGO’s unique dual-degree program provided a singular blueprint for success as an operations leader and an engineer.

“The technology and engineering education that I received at MIT plays directly into my day-to-day role. We’re constantly thinking about how to infuse technology and innovate at scale to improve outcomes for our employees and customers. That ranges from introducing robotics to our fulfillment centers to using AI to determine how much inventory we should buy and where we should place it to introducing technology on the shop floor to help our frontline leaders. Those components of my LGO education were critical,” he says. 

After receiving his undergraduate degree at the University of Washington, Tagawa pursued engineering and operations roles. But it wasn’t until LGO that he realized how important the fusion of business, operations, and leadership competencies was.

“What drew me to LGO was being able to study business and finance, coupled with an engineering and leadership education. I hadn’t realized how powerful bringing all three of those disciplines together could be,” he reflects. “Amazon’s efficacy relies on how great our leaders are, and a big part of my role is to develop, coach, and build a great leadership team. The foundation of my ability to do that is based on what I learned at MIT about becoming a lifelong learner.”

Tagawa recalls his own classes with Donald Davis, the late chair and CEO of The Stanley Works. Davis was one of LGO’s first lecturers, sharing case studies from his time on the front lines. Davis imparted the concepts of servant-leadership and diversity, which shaped Tagawa’s outlook at Amazon.

“I get energized by the leadership principles at Amazon. We strive to be Earth’s best employer and to be customer-obsessed. It’s energizing to lead large-scale organizations whose sole mission is to improve the lives of our employees and customers, with a strong focus on developing great leaders. Who could ask for something better than that?” he asks.

Diego Méndez de la Luz

This blend of leadership acumen and engineering dynamism also jump-started the career of Méndez de la Luz, now Amazon’s country director of Mexico operations. LGO’s leadership focus was crucial in preparing him for his Amazon role, where he oversees the vast majority of Amazon’s 10,000 employees in Mexico — those who work in operations — across 40 facilities throughout his home country.

At MIT, he took classes with notable professors, whom he credits with broadening his intellectual and professional horizons. 

“I was a good student throughout my education, but only after joining LGO did I learn what I consider to be foundational concepts and skills,” says Méndez de la Luz, who also started his career in engineering. “I learned about inventory management, business law, accounting, and about how to have important conversations in the workplace — things I never learned as an engineer. LGO was tremendously useful.”

Méndez de la Luz joined Amazon shortly after LGO, working his way up from frontline management roles at fulfillment centers throughout the United States. Today, he oversees the end-to-end network of imports, fulfillment, transportation, and customer delivery.

At Amazon, he believes he’s making a real difference in his native country. With Amazon’s scale comes the responsibility to improve both the planet and local communities, he says. Amazon engages with communities through volunteer programs, literacy efforts, and partnerships with shelters.

Today, Méndez de la Luz says that he’s working in his “dream job — exactly what I went to MIT for,” in a community he loves.

“My role at Amazon is a great source of pride. When I was growing up, I wanted to be the president of Mexico. I still want to make a difference for people in our society. Here, I have the ability to come back to my home country to create good jobs. Having the ability to do that has been a surprise to me — but a very positive development that I just value so much,” he says. “I want people to feel excited that they’re going to come to work and see their friends and colleagues do well.”

Chuck Cummings

This collaborative atmosphere propelled Cummings to pursue a post-MIT career at Amazon after years as a mechanical engineer. He discovered a hospitable workplace that valued growth: He began as an operations management intern, and today he leads the customer fulfillment business in Canada, which includes the country’s fulfillment centers. It’s a big job made better by his LGO expertise, and one in which he always strives for co-worker and customer satisfaction.

“I sought out LGO because I’ve always loved the shop floor,” he says. “I continue to get excited about: How do we offer faster speeds to Canadian customers? How do we keep lowering our cost structure so that we can continue to invest and offer new benefits for our customers? At the same time, how do I build the absolute best working environment for all of my employees?”

Last year, Cummings’ team launched an Amazon robotics fulfillment center in Calgary, Alberta. This was a significant enhancement for Canadian customers; now, Calgary shoppers have more inventory much closer to home, with delivery speeds to match. Cummings also helped to bring Amazon’s storage and distribution network to a new facility in Vancouver, British Columbia, which will enable nearby fulfillment centers to respond to a wider selection of customer orders at the fastest-possible delivery speeds.

These were substantial endeavors, which he felt comfortable undertaking thanks to his classes at MIT. His experience was so meaningful that Cummings now serves as Amazon’s co-school captain for LGO, where he recruits the next generation of LGO graduates for internships and full-time roles. Cummings has now worked with more than 25 LGO graduates, and he says they’re easy to pick out of a crowd.

“You can give them very ambiguous, complex problems, and they can dive into the data and come out with an amazing solution. But what makes LGO students even more special is, at the same time, they have strong communication skills. They have a lot of emotional intelligence. It’s a combination of business leadership with extreme technical understanding,” he says.

Both Tagawa and Méndez de la Luz interact frequently with LGO students, too. They agree that, while Amazon’s technology is always unfolding, its leadership qualities remain constant — and match perfectly with LGO’s reputation for creating dynamic, empathetic professionals who also prize technical skill.

“Whereas technology has grown and changed by leaps and bounds, leadership principles carry on for decades,” Tagawa says. “The infusion of the engineering, business, and leadership components at LGO are second to none.”

© Photo courtesy of Amazon.com.

Left to right: Chuck Cummings, John Tagawa, and Diego Méndez de la Luz apply their Leaders for Global Operations toolkit to guide Amazon's operations across North America.

Working through pain? You’re not alone.

Nicole Maestas.

Niles Singer/Harvard Staff Photographer

Work & Economy

Working through pain? You’re not alone.

Researchers use Dutch tool to capture the full scale of functional limitations in U.S. labor force

Alvin Powell

Harvard Staff

3 min read

A new study of functional abilities in the U.S. labor market reveals a workforce both vulnerable and resilient, with a large majority of workers reporting multiple limitations even as they fulfill their job duties, according to researchers at Harvard Medical School.

Nicole Maestas, head of the Medical School’s Department of Health Care Policy, said that the findings reflect worrying nationwide trends.

“Prior research finds that people in midlife are less healthy than people who are older now were at midlife,” said Maestas, the John D. MacArthur Professor of Economics and Health Care Policy. “And it’s even true that younger people are less healthy than the midlife people were when they were younger.”

The study, published in June in the Proceedings of the National Academy of Sciences, employed a tool developed in the Netherlands to assess disability claims. The Dutch tool measures 97 job-related functional abilities, providing a far more granular picture of American workers than the U.S. government’s disability measure, which considers six domains.

“We haven’t seen a detailed portrait like this of the American workforce,” Maestas said. “It’s not that we’re measuring it better, it’s that we’re measuring it for the first time.”   

“We haven’t seen a detailed portrait like this of the American workforce.”

Nicole Maestas

The study of 3,396 working adults age 22 and older found that three-fourths faced at least one functional limitation. It also indicated that U.S. workers average more than five functional limitations each. The most prevalent limitations involve upper-body strength and torso range of motion. Also common are limitations related to sensitivity to the ambient environment — hot weather, for example — and to knee function. Other limitations include problems linked to the immune system, head and neck movements, emotional regulation, and cognition.

The researchers also asked workers about underlying medical issues. The conditions that cause the greatest number of functional limitations are mental illness, joint conditions such as arthritis, substance use disorder, and asthma and chronic obstructive pulmonary disease.

The data was collected in 2019. The National Institute on Aging grant supporting the project has been canceled, but Maestas said that researchers managed to collect additional data early this year for a follow-up study that she hopes will identify targets for intervention.

While the employment of people with functional limitations is a success of the U.S. labor market, the new paper highlights the vulnerability of the workforce and, by extension, the national economy, Maestas said. The highest levels of functional limitations were seen in jobs that involve constant physical labor, as well as clerical, service, and sales positions. Many of these roles are essential. The upshot is a workforce less equipped for the impact of a pandemic or some other major disruption.

“The fact that so many people with functional limitations are working is a success,” Maestas said. “It also reveals points of vulnerability when you’re thinking about the country’s broader economic performance. The backdrop of this study is the fact that the U.S. population is aging at its most rapid clip ever. We knew this was coming but you have more people retiring than are coming into the workforce. We need workers in order to keep our economy growing.”

Slavery researchers seek more detailed picture of pre-Civil War Harvard

Campus & Community

Slavery researchers seek more detailed picture of pre-Civil War Harvard

Gabriel Raeburn and Christine Bachman-Sanders review documents.

Photo courtesy of Claire Vail at American Ancestors

Jacob Sweet

Harvard Staff Writer

9 min read

Careful effort to identify leaders, faculty, and staff is key to descendants probe: ‘This work takes time to do well’

In their efforts to trace the descendants of enslaved people connected to Harvard, researchers with American Ancestors first had to tackle a surprisingly difficult question: Who were the University’s pre-Civil War leaders, faculty, and staff?

Now, a once-scattered record is steadily coming into focus.

The work started soon after the University accepted the recommendation of the Presidential Committee on Harvard & the Legacy of Slavery to identify, engage, and support direct descendants of enslaved individuals. In their report, released in 2022, the committee identified several Harvard leaders, faculty, and staff who enslaved people. Among them were philanthropist Benjamin Bussey, who built his wealth through the trans-Atlantic trade of products produced by enslaved people and later donated his estate to Harvard College, and steward Andrew Bordman, who owned and relied on eight enslaved people to feed Harvard students and complete his job duties.

Efforts to identify pre-Civil War leaders, faculty, and staff have been underway since 2023, in parallel with research to identify direct descendants of enslaved individuals. Researchers with the Harvard Slavery Remembrance Program led this aspect of the work, while American Ancestors advanced the direct descendant research. In January, American Ancestors also took the lead on the research to identify Harvard officials.  

“At first glance, it seems like a straightforward task to ‘identify leadership, faculty, and staff,’” said Lindsay Fulton, chief research officer at American Ancestors. “But that’s a modern perspective that’s shaped by access to yearbooks, alumni directories, and carefully maintained records. Those tools didn’t always exist, so our researchers had to get creative in locating where, and how, these names were documented. In our experience, this work takes time to do well.”

“Those tools didn’t always exist, so our researchers had to get creative in locating where, and how, these names were documented. In our experience, this work takes time to do well.”

Lindsay Fulton

For well over 200 years — from Harvard’s founding in 1636 to the end of the Civil War in 1865 — the University operated while slavery remained legal in at least parts of the U.S. Even after 1783, when slavery was effectively banned in Massachusetts, leaders, faculty, and staff could still come into ownership of enslaved people, often referred to as servants, through relatives, or could run businesses closely tied to the labor of enslaved people.

While positions like the University’s president and treasurer are easy to trace through hundreds of years of history, others are not.

Gabriel Raeburn reviews documents with fellow American Ancestors researcher Christine Bachman-Sanders.

Photo courtesy of Claire Vail at American Ancestors

“The University doesn’t have its own compiled digital staff directory before the modern era,” said Gabriel Raeburn, senior research project manager at American Ancestors. “The first step, even for finding enslaved people, was to go through thousands and thousands of pages of dense archival records in 17th- and 18th-century cursive to work out who the people are who worked at the University.”

To identify leaders, faculty, and staff, researchers continue to comb through handwritten notes from University meetings, as well as stewards’ books, faculty records, colonial and state legislative charters, church rosters, city archives, and a variety of other sources to recreate a roster from the ground up. Through this work, researchers at the Harvard Slavery Remembrance Program and American Ancestors have verified more than 3,000 members of leadership, faculty, and staff from this period.

A foundation for deeper knowledge

Figuring out who worked at pre-Civil War Harvard often begins with fragments of information: a brief mention in centuries-old meeting notes, a class registry. For Harvard’s early history, identifying the makeup of the leadership, staff, and faculty requires a deep understanding of the University’s connections to colonial and local church leadership and knowing where to look. Unearthing this information relies on proven genealogical methodologies that the researchers at American Ancestors are skilled at applying.

For example, for two centuries, certain members of the colonial government and ministers of local towns and cities were automatically granted positions on Harvard’s Board of Overseers. Therefore, researchers looked to legislative acts and Harvard’s colonial charters to see which roles were automatically granted leadership positions — like the Congregational ministers of Boston, Cambridge, Charlestown, Watertown, Dorchester, and Roxbury — and are using church documents to determine which individuals were on Harvard’s board.

These contextual methods are particularly important to try to bridge gaps in Harvard’s own archives. One such gap owes to a 1764 fire that destroyed much of the University’s collections.

At the bottom of these handwritten notes from a 1737 meeting between president and faculty, six new waiters are identified by last name.

Harvard University, Harvard University Archives, UAI5_5_B08_V12-METS

Additionally, for those without extensive experience in records-based genealogy, the records that the University has in its possession can be difficult to decode. Most are written in script of variable clarity and consistency.

Researchers must also know where to look. For example, in earlier years of the University’s existence, it was during Harvard Corporation meetings that leaders appointed paid staff members — often current or recently graduated students. In most cases, notetakers did not list the new staff members by full name. Instead, most are referred to by last name, and in cases where there are multiple students at Harvard with that same name, by a mark of seniority. Someone with the last name Smith who was appointed as head cook, for example, might be referred to as Smith Jr.

Genealogists at American Ancestors now have a system for categorizing people with the same last name. For certain periods of University history, figuring out which Smith was appointed for a new position means going through records and establishing which Smith was the youngest at the time. In other periods, whether a person was referred to as senior, junior, III, or IV depended on their relative social standing. Both require researchers to peruse contemporary records and identify the proper Smith. At times, distinguishing between family members requires researchers to search through birth and death records held both in Massachusetts and across the country.

Understanding who worked at the University allows researchers to then explore whether these individuals owned enslaved people. It also gives researchers a bird’s-eye view of the interconnected names, families, and communities that shaped Harvard.

“These individuals did not operate in isolation,” said Fulton. “They studied together, taught together, published together, worshipped together, and often their children married one another. Understanding this complex, living network makes our conclusions more comprehensive, more accurate, and more reflective of the institution’s true historical landscape.”

“Understanding this complex, living network makes our conclusions more comprehensive, more accurate, and more reflective of the institution’s true historical landscape.”

Lindsay Fulton

During the period being studied by researchers, the size of the University — both students and staff — increased greatly. Harvard’s first graduating class, in 1642, included just nine students. Throughout the 17th century, there were five years in which no students graduated at all. Precise documentation of staff and faculty was sometimes hard to come by. Over time, the number of Harvard faculty, staff, and students grew and documentation improved. In 1860, Harvard awarded more than 200 degrees across the College, Medical School, and Law School.

As the University expanded, the number of individuals to sort through increased, but documents produced during these times help simplify the process. For instance, entries in the Massachusetts Register, published annually by the state beginning in 1767, recorded each new appointment to the University. Researchers can use the list and verify it with primary sources.

‘Marathon of research’

This work to establish a robust list of Harvard leaders, faculty, and staff is enabling American Ancestors not only to more accurately identify individuals who enslaved people, but also to begin uncovering the names of those who were enslaved — and, ultimately, to trace their living descendants.

The researchers emphasize that the different components of this work continue simultaneously. In addition to identifying former University leaders, faculty, and staff, researchers contributing to the Harvard Slavery Remembrance Program are working to identify those who were enslaved and their living descendants. To date, 964 formerly enslaved people and 591 living descendants of these individuals have been identified.

After pinpointing members of Harvard’s faculty, staff, and leadership, researchers from American Ancestors turn to more historical documents, like tax lists, to identify individuals who enslaved people.

Photo courtesy of Claire Vail at American Ancestors

The meticulous nature of records-based genealogy is slow, and the scope can be hard to predict. On TV shows like “Finding Your Roots,” hosted by Alphonse Fletcher University Professor Henry Louis Gates Jr., guests learn about their genealogy in a single episode. In reality, the genealogical work behind each episode of “Finding Your Roots,” which is fact-checked at American Ancestors and focuses on a single person, takes about six months.

“Genealogical research is painstaking work — poring over centuries-old records, tracing forgotten names, and piecing together histories that have often been lost or obscured,” said Gates. “It demands not just patience and rigor, but a passion for discovery. That’s why American Ancestors is the perfect organization to do this work for Harvard. Their deep expertise, meticulous attention to detail, and unwavering commitment to uncovering the stories of our past make them uniquely qualified to take on this vital work.”

“Genealogical research is painstaking work — poring over centuries-old records, tracing forgotten names, and piecing together histories that have often been lost or obscured.”

Henry Louis Gates Jr.

The identification of a clear list of pre-Civil War leaders, faculty, and staff, according to American Ancestors researchers, will lead to a much fuller picture of the University’s ties to slavery — and create a useful foundation for future research and engagement with living direct descendants. Fulton said that the 3,000 individuals they’ve identified as leaders, faculty, and staff far exceeded their initial estimate and gave the group a more accurate — and expansive — view of their work.

“Getting this right is critical — it’s the starting line for what will be a marathon of research,” Fulton said. “And in a marathon, you don’t want to head off in the wrong direction and realize halfway through that you need to double back.”

AI helps chemists develop tougher plastics

A new strategy for strengthening polymer materials could lead to more durable plastics and cut down on plastic waste, according to researchers at MIT and Duke University.

Using machine learning, the researchers identified crosslinker molecules that can be added to polymer materials, allowing them to withstand more force before tearing. These crosslinkers belong to a class of molecules known as mechanophores, which change their shape or other properties in response to mechanical force.

“These molecules can be useful for making polymers that would be stronger in response to force. You apply some stress to them, and rather than cracking or breaking, you instead see something that has higher resilience,” says Heather Kulik, the Lammot du Pont Professor of Chemical Engineering at MIT, who is also a professor of chemistry and the senior author of the study.

The crosslinkers that the researchers identified in this study are iron-containing compounds known as ferrocenes, which until now had not been broadly explored for their potential as mechanophores. Experimentally evaluating a single mechanophore can take weeks, but the researchers showed that they could use a machine-learning model to dramatically speed up this process.

MIT postdoc Ilia Kevlishvili is the lead author of the open-access paper, which appeared Friday in ACS Central Science. Other authors include Jafer Vakil, a Duke graduate student; David Kastner and Xiao Huang, both MIT graduate students; and Stephen Craig, a professor of chemistry at Duke.

The weakest link

Mechanophores are molecules that respond to force in unique ways, typically by changing their color, structure, or other properties. In the new study, the MIT and Duke team wanted to investigate whether they could be used to help make polymers more resilient to damage.

The new work builds on a 2023 study from Craig and Jeremiah Johnson, the A. Thomas Guertin Professor of Chemistry at MIT, and their colleagues. In that work, the researchers found that, surprisingly, incorporating weak crosslinkers into a polymer network can make the overall material stronger. When materials with these weak crosslinkers are stretched to the breaking point, any cracks propagating through the material try to avoid the stronger bonds and go through the weaker bonds instead. This means the crack has to break more bonds than it would if all of the bonds were the same strength.

To find new ways to exploit that phenomenon, Craig and Kulik joined forces to try to identify mechanophores that could be used as weak crosslinkers.

“We had this new mechanistic insight and opportunity, but it came with a big challenge: Of all possible compositions of matter, how do we zero in on the ones with the greatest potential?” Craig says. “Full credit to Heather and Ilia for both identifying this challenge and devising an approach to meet it.”

Discovering and characterizing mechanophores is a difficult task that requires either time-consuming experiments or computationally intense simulations of molecular interactions. Most of the known mechanophores are organic compounds, such as cyclobutane, which was used as a crosslinker in the 2023 study.

In the new study, the researchers wanted to focus on molecules known as ferrocenes, which are believed to hold potential as mechanophores. Ferrocenes are organometallic compounds that have an iron atom sandwiched between two carbon-containing rings. Those rings can have different chemical groups added to them, which alter their chemical and mechanical properties.

Many ferrocenes are used as pharmaceuticals or catalysts, and a handful are known to be good mechanophores, but most have not been evaluated for that use. Experimental tests on a single potential mechanophore can take several weeks, and computational simulations, while faster, still take a couple of days. Evaluating thousands of candidates using these strategies is a daunting task.

Realizing that a machine-learning approach could dramatically speed up the characterization of these molecules, the MIT and Duke team decided to use a neural network to identify ferrocenes that could be promising mechanophores.

They began with information from a database known as the Cambridge Structural Database, which contains the structures of 5,000 different ferrocenes that have already been synthesized.

“We knew that we didn’t have to worry about the question of synthesizability, at least from the perspective of the mechanophore itself. This allowed us to pick a really large space to explore with a lot of chemical diversity, that also would be synthetically realizable,” Kevlishvili says.

First, the researchers performed computational simulations for about 400 of these compounds, allowing them to calculate how much force is necessary to pull atoms apart within each molecule. For this application, they were looking for molecules that would break apart quickly, as these weak links could make polymer materials more resistant to tearing.

Then they used this data, along with information on the structure of each compound, to train a machine-learning model. This model was able to predict the force needed to activate the mechanophore, which in turn influences resistance to tearing, for the remaining 4,500 compounds in the database, plus an additional 7,000 compounds that are similar to those in the database but have some atoms rearranged.
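To make the screening idea concrete, here is a minimal, hedged sketch of such a workflow. It assumes generic structure-derived descriptor vectors and a small feed-forward regressor; the variable names, feature count, and model settings are illustrative placeholders, not the study’s actual descriptors or architecture.

```python
# Illustrative sketch only (assumed data shapes and names, not the study's model):
# fit a small neural-network regressor on the ~400 simulated ferrocenes, then
# predict activation forces for the thousands of unsimulated candidates.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Placeholder descriptors: one hypothetical feature vector per ferrocene.
X_simulated = rng.normal(size=(400, 64))       # compounds with simulated forces
y_force = rng.normal(loc=3.0, size=400)        # computed rupture forces (e.g., nN)
X_candidates = rng.normal(size=(11_500, 64))   # remaining database + rearranged analogs

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(128, 64), max_iter=2000, random_state=0),
)
model.fit(X_simulated, y_force)

# Rank candidates: lower predicted activation force ~ better "weak link" crosslinker.
predicted = model.predict(X_candidates)
shortlist = np.argsort(predicted)[:100]
print(shortlist[:10])
```

Since a single experimental evaluation takes weeks and a single simulation takes days, even a rough ranking of this kind narrows thousands of candidates to a shortlist worth simulating or synthesizing.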

The researchers discovered two main features that seemed likely to increase tear resistance. One was interactions between the chemical groups that are attached to the ferrocene rings. Additionally, the presence of large, bulky molecules attached to both rings of the ferrocene made the molecule more likely to break apart in response to applied forces.

While the first of these features was not surprising, the second trait was not something a chemist would have predicted beforehand, and could not have been detected without AI, the researchers say. “This was something truly surprising,” Kulik says.

Tougher plastics

Once the researchers identified about 100 promising candidates, Craig’s lab at Duke synthesized a polymer material incorporating one of them, known as m-TMS-Fc. Within the material, m-TMS-Fc acts as a crosslinker, connecting the polymer strands that make up polyacrylate, a type of plastic.

By applying force to each polymer until it tore, the researchers found that the weak m-TMS-Fc linker produced a strong, tear-resistant polymer. This polymer turned out to be about four times tougher than polymers made with standard ferrocene as the crosslinker.

“That really has big implications because if we think of all the plastics that we use and all the plastic waste accumulation, if you make materials tougher, that means their lifetime will be longer. They will be usable for a longer period of time, which could reduce plastic production in the long term,” Kevlishvili says.

The researchers now hope to use their machine-learning approach to identify mechanophores with other desirable properties, such as the ability to change color or become catalytically active in response to force. Such materials could be used as stress sensors or switchable catalysts, and they could also be useful for biomedical applications such as drug delivery.

In those studies, the researchers plan to focus on ferrocenes and other metal-containing mechanophores that have already been synthesized but whose properties are not fully understood.

“Transition metal mechanophores are relatively underexplored, and they’re probably a little bit more challenging to make,” Kulik says. “This computational workflow can be broadly used to enlarge the space of mechanophores that people have studied.”

The research was funded by the National Science Foundation Center for the Chemistry of Molecularly Optimized Networks (MONET).

© Image credit: David W. Kastner

A new strategy for strengthening polymer materials could lead to more durable plastics and cut down on plastic waste, MIT and Duke University researchers report.

MIT tool visualizes and edits “physically impossible” objects

M.C. Escher’s artwork is a gateway into a world of depth-defying optical illusions, featuring “impossible objects” that break the laws of physics with convoluted geometries. What you perceive his illustrations to be depends on your point of view — for example, a person seemingly walking upstairs may be heading down the steps if you tilt your head sideways.

Computer graphics scientists and designers can recreate these illusions in 3D, but only by bending or cutting a real shape and positioning it at a particular angle. This workaround has downsides, though: Changing the smoothness or lighting of the structure will expose that it isn’t actually an optical illusion, which also means you can’t accurately solve geometry problems on it.

Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a unique approach to represent “impossible” objects in a more versatile way. Their “Meschers” tool converts images and 3D models into 2.5-dimensional structures, creating Escher-like depictions of things like windows, buildings, and even donuts. The approach helps users relight, smooth out, and study unique geometries while preserving their optical illusion.

This tool could assist geometry researchers with calculating the distance between two points on a curved impossible surface (“geodesics”) and simulating how heat dissipates over it (“heat diffusion”). It could also help artists and computer graphics scientists create physics-breaking designs in multiple dimensions.

Lead author and MIT PhD student Ana Dodik aims to design computer graphics tools that aren’t limited to replicating reality, enabling artists to express their intent independently of whether a shape can be realized in the physical world. “Using Meschers, we’ve unlocked a new class of shapes for artists to work with on the computer,” she says. “They could also help perception scientists understand the point at which an object truly becomes impossible.”

Dodik and her colleagues will present their paper at the SIGGRAPH conference in August.

Making impossible objects possible

Impossible objects can’t be fully replicated in 3D. Their constituent parts often look plausible, but these parts don’t glue together properly when assembled. What can be imitated computationally, as the CSAIL researchers found, is the process by which we perceive these shapes.

Take the Penrose Triangle, for instance. The object as a whole is physically impossible because the depths don’t “add up,” but we can recognize real-world 3D shapes (like its three L-shaped corners) within it. These smaller regions can be realized in 3D — a property called “local consistency” — but when we try to assemble them together, they don’t form a globally consistent shape.

The Meschers approach models locally consistent regions without forcing them to be globally consistent, piecing together an Escher-esque structure. Behind the scenes, Meschers represents impossible objects as if we know their x and y coordinates in the image, as well as differences in z coordinates (depth) between neighboring pixels; the tool uses these differences in depth to reason about impossible objects indirectly.
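The depth-difference idea can be sketched in a few lines. This is a toy illustration under assumed conventions (a pixel grid with per-edge depth steps), not the Meschers implementation: each edge carries a locally plausible depth difference, yet the steps around a loop need not cancel, which is exactly the locally consistent but globally impossible situation described above.

```python
# Toy sketch (assumed grid conventions, not the Meschers code): store per-edge
# depth differences instead of absolute depths.
import numpy as np

H, W = 3, 4
dz_x = np.zeros((H, W - 1))  # depth step from pixel (i, j) to (i, j+1)
dz_y = np.zeros((H - 1, W))  # depth step from pixel (i, j) to (i+1, j)

dz_x[0, :] = 1.0   # the top row appears to climb steadily to the right...
dz_y[:, :] = 0.0   # ...while every vertical edge claims no change in depth.

def loop_residual(i, j):
    """Signed sum of depth steps around the unit square at (i, j).
    A globally consistent (physically realizable) surface gives 0 everywhere."""
    return dz_x[i, j] + dz_y[i, j + 1] - dz_x[i + 1, j] - dz_y[i, j]

print(loop_residual(0, 0))  # 1.0: no single depth map matches these local steps
```

Working with the local steps rather than an absolute depth map suggests how quantities such as geodesic distances and heat diffusion can still be computed on a shape for which no consistent global depth exists.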

The many uses of Meschers

In addition to rendering impossible objects, Meschers can subdivide their structures into smaller shapes for more precise geometry calculations and smoothing operations. This process enabled the researchers to reduce visual imperfections of impossible shapes, such as a red heart outline they thinned out.

The researchers also tested their tool on an “impossibagel,” where a bagel is shaded in a physically impossible way. Meschers helped Dodik and her colleagues simulate heat diffusion and calculate geodesic distances between different points of the model.

“Imagine you’re an ant traversing this bagel, and you want to know how long it’ll take you to get across, for example,” says Dodik. “In the same way, our tool could help mathematicians analyze the underlying geometry of impossible shapes up close, much like how we study real-world ones.”

Much like a magician, the tool can create optical illusions out of otherwise practical objects, making it easier for computer graphics artists to create impossible objects. It can also use “inverse rendering” tools to convert drawings and images of impossible objects into high-dimensional designs. 

“Meschers demonstrates how computer graphics tools don’t have to be constrained by the rules of physical reality,” says senior author Justin Solomon, associate professor of electrical engineering and computer science and leader of the CSAIL Geometric Data Processing Group. “Incredibly, artists using Meschers can reason about shapes that we will never find in the real world.”

Meschers can also aid computer graphics artists with tweaking the shading of their creations, while still preserving an optical illusion. This versatility would allow creatives to change the lighting of their art to depict a wider variety of scenes (like a sunrise or sunset) — as Meschers demonstrated by relighting a model of a dog on a skateboard.

Despite its versatility, Meschers is just the start for Dodik and her colleagues. The team is considering designing an interface to make the tool easier to use while building more elaborate scenes. They’re also working with perception scientists to see how the computer graphics tool can be used more broadly.

Dodik and Solomon wrote the paper with CSAIL affiliates Isabella Yu ’24, SM ’25; PhD student Kartik Chandra SM ’23; MIT professors Jonathan Ragan-Kelley and Joshua Tenenbaum; and MIT Assistant Professor Vincent Sitzmann. 

Their work was supported, in part, by the MIT Presidential Fellowship, the Mathworks Fellowship, the Hertz Foundation, the U.S. National Science Foundation, the Schmidt Sciences AI2050 fellowship, MIT Quest for Intelligence, the U.S. Army Research Office, U.S. Air Force Office of Scientific Research, the SystemsThatLearn@CSAIL initiative, Google, the MIT–IBM Watson AI Laboratory, the Toyota–CSAIL Joint Research Center, Adobe Systems, the Singapore Defence Science and Technology Agency, and the U.S. Intelligence Advanced Research Projects Activity.

© Image: Alex Shipps/MIT CSAIL, using assets from Pixabay and the researchers

“Meschers” can create multi-dimensional versions of objects that break the laws of physics with convoluted geometries, such as buildings you might see in an M.C. Escher illustration (left) and objects that are shaded in impossible ways (center and right).

Is dirty air driving up dementia rates?

Antonella Zanobetti.

Veasey Conway/Harvard Staff Photographer

Health

Is dirty air driving up dementia rates?

Federal funding cuts halt 3 studies exploring how pollution and heat affect the brain and heart

Liz Mineo

Harvard Staff Writer

4 min read

Antonella Zanobetti was conducting groundbreaking research to examine links between exposure to environmental factors, such as pollution and heat, and deadly neurological and cardiovascular diseases. But three of her studies came to a halt with the Trump administration’s mass cancellation of Harvard research grants in May.

Preliminary evidence suggests air pollution harms the brain, said Zanobetti, an environmental epidemiologist and principal research scientist at the T.H. Chan School of Public Health. She had hoped that her studies would raise awareness of potential links between exposure and increased risk of dementia, as well as explore the protective effects of modifiable risk factors such as green space.

“It’s crucial to finish all the work that we are doing,” said Zanobetti, who led a team of researchers in 2020 to conduct the first national study on air pollution’s effect on Alzheimer’s and Parkinson’s. “We need to understand the factors that can impact hospitalization for neurological disorders. The high prevalence of neurodegenerative diseases is a matter of public health.”

Fueled by aging and industrialization, neurological disorders are surging around the country and the world. Alzheimer’s disease is the sixth leading cause of death in the U.S., and the death rates for Parkinson’s are rising fast. The number of people globally with Parkinson’s is projected to reach more than 12 million by 2040.

“It’s important to understand the role of environmental exposures on neurological disorders to help develop public health policies.”

For one of Zanobetti’s halted studies, her team was analyzing Medicare and Medicaid claims to estimate how long-term exposure to air pollution may increase hospitalizations for Alzheimer’s and related dementias. “We wanted to assess whether air pollution exposure increases risk of mortality and/or hastens rehospitalization,” she said.

Collecting the data was challenging because when patients with Alzheimer’s or Parkinson’s are hospitalized, their neurodegenerative disease is often not the main reason. “It could be a stroke or a fall,” said Zanobetti. “We were in the middle of developing methods to overcome statistical challenges, including outcome misclassification, in addition to studying the impact of heat on hospitalizations.”

Another study, co-led by Danielle Braun, examining the effect of heat and other environmental exposures on hospitalizations for Parkinson’s was supposed to have two more years of funding when it was canceled.

“We were in the middle of looking at the effects of high temperature and other air pollutants on Parkinson’s hospitalization,” said Zanobetti. “We wanted to estimate the chronic and acute effects of multiple environmental exposures to understand the impact of air pollution, heat, or other exposure on Parkinson’s hospitalizations.”

Zanobetti had a third grant terminated. Co-led by Petros Koutrakis, the study was to be the first to provide evidence of the effects of particle radioactivity on heart disease, which is the leading cause of death in the U.S.

Particulate matter, or tiny particles of air pollutants, can be inhaled and reach the lungs, the heart, and the brain, said Zanobetti. Particle radioactivity is caused by radionuclides in the air that attach to ambient fine-particle pollution and, after inhalation, release ionizing radiation inside the body.

The Environmental Protection Agency has used previous research by Zanobetti and her team on particulate matter’s impact on health to lower National Ambient Air Quality Standards for fine particulate matter in order to reduce health risks linked to air pollution. Last year, her work and that of other T.H. Chan School of Public Health researchers helped establish more rigorous federal regulations on particulate air pollution.

Overall, Zanobetti’s three canceled grants sought to provide scientific evidence of the links between environmental factors and Alzheimer’s, Parkinson’s, and heart disease to inform the development of policies that would improve air quality and protect public health, she said.

“It’s important to understand the role of environmental exposures on neurological disorders to help develop public health policies,” said Zanobetti. “It’s really heartbreaking to see that everything we worked for has been stopped. There is so much to discover, so much to learn, and we cannot do it.”

‘By mid-March, corpses littered the street like newspapers’

Nation & World

‘By mid-March, corpses littered the street like newspapers’

long read

Young Ukrainian mother and her toddler left to fend for themselves after husband joins soldiers defending Mariupol

Excerpted from “By the Second Spring: Seven Lives and One Year of the War in Ukraine” by Danielle Leavitt, Ph.D. ’23.

By the end of February, Leonid had begun taking food and supplies to the Ukrainian soldiers at the front lines of Mariupol’s defense. He talked about them constantly — he called them “his guys” — and he worried about them, regaling Maria with how their positions were changing and they weren’t getting the help they needed. He bought carton upon carton of cigarettes and as many jugs of water as he could find, then drove through the shelling to deliver them. He was eager to help, and even as the barrages intensified and Maria said she didn’t want him to go anymore, he still went several more times.

On March 1, Maria and Leonid decided that staying in their apartment for any length of time during the daylight hours was no longer an option. They would shelter in the basement. For the time being, they would still sleep in the apartment — mainly for comfort — but if things got even worse, they’d begin sleeping in the cellar, too. Explosions, shelling, and shock waves were so frequent that darting from the basement to do anything — grab an item from the apartment, get some fresh air, cook food — risked sudden death.

Maria’s older sister, her husband, and their toddler son had also joined Leonid, Maria, and David by the beginning of March, and they stayed in the cellar for 12 hours at a time, trying to keep everyone warm and fed and entertain the two babies. In their courtyard, Leonid broke down the crates and old furniture they found in the basement to build a fire. He melted snow to boil and cooked soup and dried pasta.

Photo by Carolyn Moffat

On March 3, Leonid began preparing his military clothes. He had received some ribbons from those he visited on the front lines — ribbons that suggested a specific group or unit — and she saw him sew them on the chest of his uniform. He was enlisting, and she was watching it happen. Before the full-scale invasion, young men in Ukraine were required to serve 12 to 18 months in the army, but as Russia invaded, Ukraine did away with that policy. The state instead implemented new conscription practices, allowing the government to summon for service any able-bodied man between the ages of 27 and 60, including those without former military experience. Later, Ukraine would lower this age to 25 years. Men would often receive a summons to report to a recruitment center, after which they would be medically examined and sent off for a short stint in training. Early on, many men and women volunteered without a summons, a surge that sustained the army in the first months of the war.

Leonid had completed his compulsory military service in the previous years. Though he was not summoned, seeing the situation deteriorate so rapidly in his hometown compelled him to rejoin the ranks.

On March 5, Leonid drove across town to wish his mother a happy birthday. It confused Maria that he’d risked exposure during an air raid simply to see her, but he insisted on going there in person.

Early the next morning, Leonid gently woke Maria. “We need to say goodbye,” he whispered. Still groggy, she shrugged him off. “Maria, it’s time to say goodbye,” he insisted. He had already been out that morning on a reconnaissance mission. She didn’t understand. “What are you talking about?” She yawned.

“Let’s say goodbye. I need to go.”

“Let’s say goodbye. I need to go.”

She pushed her eyes open, and he looked at her with a seriousness that scared her. He did not look away.

“No, no, Leonid,” she whispered. She would have to talk sense into him, beg him to stay. “No, Lyonya,” she said, using his familiar shortened name. “You can’t leave me,” she pleaded, “David, our life. What goes on there is not for you. Let’s leave together, we can try to get out through the humanitarian corridor, we can go as a family.”

He cast his eyes down. “I have to go, Maria.” Watching him carefully, she knew he was serious — she had never seen him this resolute, as though his face had turned to stone, as though nothing she did, no threats, no pleading, no weeping, could keep him there. He tried to embrace her, and she stiffened, flaring with anger and grief. He turned to walk out into the stairwell.

“I was in a stupor, I just lay there, stuck. I didn’t understand,” Maria said.

Her parents then told her to go chase after him, talk to him. Following him into the stairwell, Maria caught up with him. Leonid was upset. He twitched with agitation and emotion.

“Maybe you’ll at least hug me?” he said, and she did, and the pain sliced through them. Before he could change his mind or she could say anything, he turned and jogged down the stairwell.

Telling me the story several months later, her voice wavered with emotion: “I truly did not think he would go. But I watched him leave.”

The next day, Leonid’s father came to check on them and bring some food.

“Where’s Leonid?” he asked.

Maria realized that Leonid had not told anyone else, not even his parents when he’d gone to see them.

“Where is Leonid?” her father-in-law asked again.

“He went to fight,” Maria said.

With Leonid gone, Maria knew she would need to fortify herself. Despite her stubbornness and resilience, she had come to rely on him in their relationship. Without him, Maria knew she could not expect anyone to help her anymore.

By the time Leonid left, her parents and sister, along with her sister’s husband and young son, were staying with her in the same basement. Their basement was large, and to get to the part of it where they could sit down, where they had built a small encampment, they had to walk through dark tunnels, feeling their way along the cold stone walls. Her mother did not hear well and her father did not walk well, and Maria’s days quickly evolved into the singular pursuit of food, water, and heat. I will do everything now, she told herself constantly, like a mantra. I can do everything now. I will be the strong one. Later that day Leonid’s colleague came and brought Maria a letter from Leonid. It was a short note, but he wrote that everything was OK with him, he was safe and healthy, he was thinking about them, he loved them. She knew he felt guilty for leaving, she could hear it in his note. If he would just come back, she thought, they could have a long talk and sort it all out. But with every passing day, he didn’t. She got very little concrete information from him — only an occasional check-in to say that he was OK and he loved them — and she was furious.

Though they never took off their coats or shoes in case they had to run, the children screamed constantly from cold. Maria and her family tried occupying them in the basement by playing games, telling stories, and rocking them to sleep. But explosions roared outside relentlessly, frightening and waking the children. They could not let the kids watch TV or play on tablets or phones because any battery life they had on their devices was a precious commodity reserved exclusively for communication.

They became dirty quickly, and there was no water to wash themselves. Maria crawled out of the basement a couple of times a day to make a fire in the courtyard and prepare soup with potatoes and canned fish. They also boiled pasta and fried it with tomatoes and onions. Sick to their stomachs with anxiety and constantly cold, Maria and her sister couldn’t bring themselves to eat much. They were both breastfeeding and started to lose their milk supply, which further distressed the children, who batted at their breasts begging for milk that was not coming.

Every day was the same: They were awakened by the sounds of shelling, a distinct metallic whir followed by concussive blasts at impact, then a couple of hours of silence. They waited every moment for it to begin again, wondering if the shelling would be closer this time. When the bombing began once more, she’d go so rigid that the edges of all her body’s muscles would ache. Taking a deep breath, she’d run to David, pick him up, hold him close, sing him songs, and rock him gently, a meditative motion she did as much for her own comfort as for his.

Periodically, at her own risk, she took David to the apartment to run around for 10 minutes or so. “It drove me crazy that I was sitting there in the basement,” she said. “It was so dark, my eyes couldn’t see at all when I came out into the light.”

When a bar of service appeared on her phone, she’d receive a handful of messages — from her sister in Kharkiv, from friends who had already evacuated, from Leonid. He would not say where he was fighting, but she knew he was in the city. Witnessing the daily carnage, he urged her and the family to leave Mariupol as soon as they could.

After he left on March 6, Leonid came back in person three times: once on March 8 to wish her a happy International Women’s Day — a major holiday in former Soviet countries — then on March 11, and finally on March 13. Each time it was, as Maria writes, “for literally one minute,” except the last visit, when he was able to stay for five. He met her outside the basement, hugged her, and ran quickly to the cellar to see David, swooping in, picking David up, and hugging him tight, trying to make him laugh. The last time Leonid came, he ran to the basement, where David was sleeping, and laid his face near his son’s for a moment.

The last time Leonid came, he ran to the basement, where David was sleeping, and laid his face near his son’s for a moment.

What kind of conversation can two people have in one minute? She told him that she had been making a fire in the courtyard, what they were eating, if they’d had any news from her sister. He told her to leave the city immediately, as soon as they could arrange an evacuation vehicle. He’d meet them wherever they went as soon as it was over. As he shifted to leave, they hugged, and she looked away so that she didn’t fall apart and cling to his clothes, begging him to stay like a woman possessed. Then he ran off.

“I didn’t know what he had become,” she wrote later. “I didn’t understand at all. I didn’t understand the essence of the disaster.”

Because the city was constantly, indiscriminately shelled, leaving it posed enormous risk. People who tried to escape were killed every day, hit by shells or shrapnel or snipers. At checkpoints, Russian soldiers often forced evacuees to undress and examined their tattoos. They confiscated phones and searched texts, emails, and photos for any indication of Ukrainian patriotism. Maria was 23 years old, and small. She worried that at a checkpoint she would have no capacity to defend herself against rape, assault, or abduction, especially because she would travel with her parents, both of whom were in poor health and could do little to protect her. They decided they’d wait a few more days to see if things calmed down. “How long could this unending bombardment possibly continue?” she wondered. But Leonid insisted that they must get out — that things would never return to normal, that there was no life left to be had in Mariupol. By then the police force in Mariupol had collapsed, and the next day, the Mariupol Drama Theater was bombed. A thousand civilians had sheltered underneath the building and several hundred were killed.

By mid-March, corpses littered the street like newspapers, victims of violence, hunger, or untreated infections. People were scared to look too closely. What if you recognized them? Eventually the Russian troops occupying certain parts of the city began collecting the bodies in trucks and depositing them in the city square.

Maria occasionally returned to the apartment to retrieve toys or secure the windows and doors, trying to keep it pristine. She still held out hope that eventually they’d return to that apartment and resume their life. From there, she caught broader views of the city. “I had a view from the window, I saw absolutely everything,” she wrote. “The whole city was burning.” She could see, in the distance, one of the large steel factories in town, Azovstal, glowing. Smoke rose in a continuous black cloud over the horizon. At night, the sky glowed pink, and buildings crackled in flames or smoldered, collapsing piece by piece.

By March 19, Maria decided they needed to leave. They had no more candles or matches. “We were just walking by inertia, in the darkness. I was trying to feel my way to the doors to get out of the basement.” She and her sister gathered their possessions in the apartment, letting the children get a better sleep in the beds a final time before departing in the morning. Through the middle of the night, Maria and her sister pumped breast milk for the journey to ensure that they would not need to lift their shirts and could calm the babies with bottles in a pinch. As they pumped in silence, they heard a whistle and planes roaring overhead. Somewhere near them an air assault was underway, and when the bomb dropped, they felt their building sway, the furniture sliding across the floor.

For most an evacuation ride was extremely difficult to secure. Though drivers came with their cars and buses from throughout Ukraine to help in the effort, the route was dangerous, and drivers began charging high prices — several hundred dollars — for rides just beyond the city limits. Maria and Leonid’s car had been damaged by a shell, so it was not reliable, but Leonid’s father agreed to take them. Only part of their group — Maria, David, and Maria’s dad — could fit on the first trip; the others, Maria’s sister, nephew, brother-in-law, and mom, would need to wait until Leonid’s father got back and was ready for another trip. It would be, they hoped, just a day or two. With a white ribbon tied to the car to indicate they were civilians, they inched through the city toward the checkpoint. They lived on the outskirts, and it was a short drive to the edge of the city. “As we pulled out onto the main street, I saw that every house was burned down. There were tanks lying around on the roads, buses overturned, people were digging graves at every step — every step, wherever there was a free spot.” Their city was gone, replaced with ghosts. She went on: “Where there had been trees, or in the fields, where there used to be just gardens, now bodies are just lying there. And people walk, people walk on them.”

Their city was gone, replaced with ghosts.

They crossed through 15 checkpoints to leave the city. Russian soldiers rifled through her bags, patted her body, looked at her son. They made men strip naked and stole food and belongings. After hours of waiting, Maria’s party crossed the city limits into a village on their way to Zaporizhzhya, the closest major city under Ukrainian control, 120 miles away. She had never been to Zaporizhzhya. In fact, she had never been much of anywhere at all. Except for a few short trips to neighboring cities and one to Kyiv, she’d spent her whole life in the city behind her, just like her parents, just like her grandmother Vera before her.

“She is nearby,” Maria said, “I know this for certain.”

Published by Farrar, Straus and Giroux. Copyright © 2025 by Danielle Leavitt. All rights reserved.

Creative spark: New Cambridge musical based on 'haunted summer' of 1816 and birth of 'Frankenstein'

Natasha Atkinson, left, and Nat Riches have created the new musical based on the legendary 1816 meeting of minds.

Nat Riches, who studies Natural Sciences at Trinity College, and Natasha Atkinson, who graduated from Downing College with a Law degree in 2024, were intrigued by the ‘haunted summer’ of 1816, when tumultuous weather confined writers including Lord Byron, Percy Shelley and Mary Godwin (later Mary Shelley) to a villa beside Lake Geneva.

1816: The Year Without A Summer is one of two Cambridge student productions selected by the Cambridge University Musical Theatre Society for the Camden and Edinburgh Fringe festivals this summer.

The musical brings to life these literary figures, along with Byron’s personal doctor, John Polidori, and Byron’s lover, Claire Clairmont, who were cooped up in the Villa Diodati by the terrible weather wrought by the huge volcanic eruption of 1815 in what was then the Dutch East Indies.

As rain lashes down, the wind picks up, and darkness falls by afternoon, Byron sets the assembled guests a challenge: to write the most chilling ghost story. That challenge led to the conception of Mary Shelley’s Frankenstein and John Polidori’s The Vampyre, two of the greatest horror stories of the last two centuries.

1816: The Year Without A Summer premieres at the Camden Fringe Festival on 6 and 7 August.

Read more about the musical on Trinity College's website.

Cambridge students – one a researcher in the electrical currents of heart tissue – have created a new musical partly inspired by the genesis of Mary Shelley's monster, which was brought to life by electricity.

Harvard aligns resources for combating bias, harassment

Campus & Community

Harvard aligns resources for combating bias, harassment

Peggy Newell (left) and Nicole Merhill.

Harvard file photos

Nicole Rura

Harvard Correspondent

8 min read

Office for Community Support, Non-Discrimination, Rights and Responsibilities targets discrimination, bullying, sexual harassment, and other misconduct

Harvard on Monday announced the establishment of the new Office for Community Support, Non-Discrimination, Rights and Responsibilities (CSNDR), a move that aligns resources, supports, and policy implementation previously housed across the Office for Community Conduct (OCC) and the Office for Gender Equity (OGE).

Nicole Merhill, the director of CSNDR and the University’s Title IX coordinator, and Peggy Newell, vice president and deputy to the president, spoke with the Gazette about this new alignment of resources and supports available to all members of the community, the laws and policies the new office upholds, and the shared responsibility for creating a safer and more inclusive community.


What is the Office for Community Support, Non-Discrimination, Rights and Responsibilities (CSNDR)?

Newell: This new office brings together all of the important work happening under the Office for Community Conduct and the Office for Gender Equity and continues it in one place, with the aim of making it easier for members of our community to know what resources and supports are available to them and where they can go in order to access them.

Merhill: Under the newly formed CSNDR umbrella, we will have further aligned these resources and supports — the confidential SHARE team, the prevention team, and the NDAB [Non-Discrimination and Anti-Bullying] and Title IX compliance team. Both the prevention team and the compliance team have expanded their portfolios to cover Title IX, other sexual misconduct, non-discrimination, and anti-bullying. The SHARE team remains dedicated to serving community members who may have experienced sexual harassment, sexual assault, stalking, abusive relationships, or discrimination on the basis of gender or sexual orientation.

The CSNDR office works to provide accessible information on discrimination, including antisemitism and Islamophobia, as well as sexual harassment, other sexual misconduct, and bullying. This work is grounded in the commitment to ensuring that every member of our community has the opportunity to learn, conduct research, and work in an environment free from discrimination, harassment, and other forms of harm. Before this merger, OCC focused on implementing the University’s policies and procedures for non-discrimination and anti-bullying, while the Title IX team within OGE focused on implementing the University’s policies and procedures addressing sexual harassment and other sexual misconduct.

Newell: Nicole came to Harvard from the federal agency that oversees both Title IX and Title VI as well as other federal civil rights laws. During her nearly 10 years here as the director of OGE and as the University Title IX coordinator, she has built strong relationships across Harvard’s Schools and in our community. We’re very fortunate to have her — a civil rights attorney who knows both what is required by these regulations and how to navigate Harvard systems to increase access to support — leading this new CSNDR office.

Title VI

Prohibits discrimination on the basis of race, color, and national origin in programs and activities receiving federal financial assistance. 

Title IX

Prohibits sex-based discrimination in education programs and activities that receive federal financial assistance. 

Why were OCC and OGE combined?

Newell: We believe the new structure will improve access to the supports and resources available to members of our community, make clearer the expectations built into our policies, and strengthen our ability to respond appropriately to policy violations when they happen.

Merhill: Yes, OCC and the Title IX team within OGE had parallel missions: to provide information about their respective policies and procedures, to support community members regarding those policies, to review concerns under the policies (including examining systemic impact), and to handle formal complaints, informal resolutions, appeals, and hearings under these policies.

Newell: We recognized that our community was confused by different offices handling concerns that touched on issues of discrimination. Now, the NDAB and Title IX compliance team within CSNDR can support individuals in response to issues of discrimination, bullying, sexual harassment and other sexual misconduct, which is more convenient and efficient, and responsive to what we have heard from community members. Also, many of the School-based staff who serve as local designated resources for non-discrimination and anti-bullying also serve as local Title IX resource coordinators. With all of those considerations in mind, combining OCC and the Title IX team within OGE into one compliance team under CSNDR is a better way to serve our community.

What can community members expect from this change?

Merhill: All previous resources, including the good work of the prevention team and our confidential SHARE team, will continue.

The prevention team’s mission will expand to look at how we strengthen capacity across our community to combat forms of harm broadly, whether in the realm of discrimination based on a protected class, sexual harassment, or bullying. It’s a nice alignment, because our prevention team would often lead bystander training and be asked to incorporate content on race-based or other protected-class discrimination.

The invaluable SHARE team remains dedicated to providing individual and community-level support to those who may have experienced sexual harassment, sexual assault, stalking, abusive relationships, or discrimination on the basis of gender or sexual orientation. Additionally, the SHARE team will continue to offer confidential accountability support for individuals and communities who may have caused harm. These critically supportive resources have not changed.

And the new NDAB and Title IX compliance team allows us to be more efficient by being able to address policy-related issues in one space, under the University’s policies addressing non-discrimination, anti-bullying, sexual harassment, and other sexual misconduct, without a potential hurdle of separate or duplicate outreach and engagement that could emerge in the previous structure.

In addition to bringing together existing staff from OCC and the Title IX team within OGE, over the summer we hired a new staff member who serves as the University’s Title VI coordinator and deputy for compliance. We are also in the process of hiring two additional staff members — a deputy for Title VI and Title IX compliance, who will support our network of local Title IX resource coordinators and local designated resources and serve as a facilitator of informal resolutions, and a deputy Title VI coordinator and case manager, who will consult on complaints of discrimination, including all complaints of antisemitism. Each of these roles will bring additional support and expertise to the NDAB and Title IX compliance team.

You mentioned that you expanded the resources in the compliance team. Can you tell us more about those changes?

Merhill: On the compliance side, as I mentioned earlier, over the summer we filled a new position, the Title VI coordinator and deputy for compliance, who oversees the formal complaint side of the work and who has already been working with our Schools and community members on these issues. Our newest staff member has extensive experience addressing concerns of Title VI and Title IX discrimination at the federal level, including investigating and resolving concerns of sexual harassment, racial harassment, and discrimination on the basis of shared ancestry, including antisemitism, Islamophobia, and other forms of harm.

When we were assessing the new NDAB and Title IX compliance team’s needs, we also heard from the community a desire for the processes related to reviewing and responding to complaints to proceed more quickly. Based on that feedback, we created and are actively recruiting two new positions to provide additional support for the community and make those processes more efficient: a deputy for Title VI and Title IX compliance and a deputy Title VI coordinator and case manager.

CSNDR is responsible for providing essential trainings on non-discrimination, sexual harassment, and other misconduct. Do you anticipate any changes to training that the University offers?

Merhill: Today we rolled out an eLearning module to all incoming and returning students across the University. All students are required to complete the course in order to be enrolled in courses at Harvard.

On Sept. 8, the module will be assigned to all staff, faculty, and postdoctoral fellows. This module will be substantially similar to the module provided to students, but it also includes information for faculty, staff, and postdoctoral fellows on their role as “responsible employees” for matters under the University’s Title IX and other sexual misconduct policies.

The course takes about an hour to complete.

Newell: We really appreciate our community members taking the time, and we welcome everyone’s feedback as this is our first iteration where we combine non-discrimination and harassment, including antisemitism and Islamophobia, sexual harassment, and other sexual misconduct into one module.

Merhill: It’s an hour of time spent on issues that are exceedingly important for all of us to recognize, understand, and actively address. The module includes information on our community expectations as reflected in our policies, our own individual responsibilities in our work to meet those expectations, what the University’s responsibilities are, and what resources are available if someone encounters one of these concerns.

Where can Harvard community members learn more about the resources CSNDR provides?

Merhill: In addition to the information in the eLearning initiative, we rolled out a new website at csndr.harvard.edu today. We encourage everyone to visit the website and also provide feedback. The website is organized according to each team’s services and resources, and you are invited to visit each of the teams to learn more about their work in each of those spaces. We look forward to continuing and deepening our work in this important space.

Youssef Marzouk appointed associate dean of MIT Schwarzman College of Computing

Youssef Marzouk ’97, SM ’99, PhD ’04, the Breene M. Kerr (1951) Professor in the Department of Aeronautics and Astronautics (AeroAstro) at MIT, has been appointed associate dean of the MIT Schwarzman College of Computing, effective July 1.

Marzouk, who has served as co-director of the Center for Computational Science and Engineering (CCSE) since 2018, will work in his new role to foster a stronger community among bilingual computing faculty across MIT. A key aspect of this work will be providing additional structure and support for faculty members who have been hired into shared positions in departments and the college.

Shared faculty at MIT represent a new generation of scholars whose research and teaching integrate the forefront of computing and another discipline (positions that were initially envisioned as “bridge faculty” in the 2019 Provost’s Task Force reports). Since 2021, the MIT Schwarzman College of Computing has been steadily growing this cohort. In collaboration with 24 departments across the Institute, 20 faculty have been hired in shared positions: three in the School of Architecture and Planning; four in the School of Engineering; seven in the School of Humanities, Arts, and Social Sciences; four in the School of Science; and two in the MIT Sloan School of Management.

“Youssef’s experience leading cross-cutting efforts in research and education in CCSE is of direct relevance to the broader goal of bringing MIT’s computing bilinguals together in meaningful ways. His insights and collaborative spirit position him to make a lasting impact in this role. We are delighted to welcome him to this new leadership position in the college,” says Dan Huttenlocher, dean of the MIT Schwarzman College of Computing and the Henry Ellis Warren Professor of Electrical Engineering and Computer Science.

“I’m excited that Youssef has agreed to take on this important role in the college. His thoughtful approach and nuanced understanding of MIT’s academic landscape make him ideally suited to support our shared faculty community. I look forward to working closely with him,” says Asu Ozdaglar, deputy dean of the MIT Schwarzman College of Computing, head of the Department of Electrical Engineering and Computer Science (EECS), and the MathWorks Professor of EECS.

Marzouk’s research interests lie at the intersection of computational mathematics, statistical inference, and physical modeling. He and his students develop and analyze new methodologies for uncertainty quantification, Bayesian computation, and machine learning in complex physical systems. His recent work has centered on algorithms for data assimilation and inverse problems; high-dimensional learning and surrogate modeling; optimal experimental design; and transportation of measure as a tool for statistical inference and generative modeling. He is strongly motivated by the interplay between theory, methods, and diverse applications, and has collaborated with other researchers at MIT on topics ranging from materials science to fusion energy to the geosciences.

In 2018, he was appointed co-director of CCSE with Nicolas Hadjiconstantinou, the Quentin Berg Professor of Mechanical Engineering. An interdisciplinary research and education center dedicated to advancing innovative computational methods and applications, CCSE became one of the academic units of the MIT Schwarzman College of Computing when it formally launched in 2020.

CCSE has grown significantly under Marzouk and Hadjiconstantinou’s leadership. Most recently, they spearheaded the design and launch of the center’s new standalone PhD program in computational science and engineering, which will welcome its second cohort in September. Collectively, CCSE’s standalone and interdisciplinary PhD programs currently enroll more than 70 graduate students.

Marzouk is also a principal investigator in the MIT Laboratory for Information and Decision Systems, and a core member of MIT’s Statistics and Data Science Center.

Among his many honors and awards, he was named a fellow of the Society for Industrial and Applied Mathematics (SIAM) in 2025. He was elected associate fellow of the American Institute of Aeronautics and Astronautics (AIAA) in 2018 and received the National Academy of Engineering Frontiers of Engineering Award in 2012, the MIT Junior Bose Award for Teaching Excellence in 2012, and the DOE Early Career Research Award in 2010. His recent external engagement includes service on multiple journal editorial boards; co-chairing major SIAM conferences and elected service on various SIAM committees; leadership of scientific advisory boards, including that of the Institute for Computational and Experimental Research in Mathematics (ICERM); and organizing many other international programs and workshops.

At MIT, in addition to co-directing CCSE, Marzouk has served as both graduate and undergraduate officer of the Department of AeroAstro. He also leads the MIT Center for the Exascale Simulation of Materials in Extreme Environments, an interdisciplinary computing effort sponsored by the U.S. Department of Energy’s Predictive Science Academic Alliance program.

Marzouk received his bachelor’s, master’s, and doctoral degrees from MIT. He spent four years at Sandia National Laboratories, as a Truman Fellow and a member of the technical staff, before joining the MIT faculty in 2009.

© Photo: Jiin Kang

Youssef Marzouk is the Breene M. Kerr (1951) Professor in the Department of Aeronautics and Astronautics.

Ultrasmall optical devices rewrite the rules of light manipulation

In the push to shrink and enhance technologies that control light, MIT researchers have unveiled a new platform that pushes the limits of modern optics through nanophotonics, the manipulation of light on the nanoscale, or billionths of a meter.

The result is a class of ultracompact optical devices that are not only smaller and more efficient than existing technologies, but also dynamically tunable, or switchable, from one optical mode to another. Until now, this has been an elusive combination in nanophotonics.

The work is reported in the July 8 issue of Nature Photonics.

“This work marks a significant step toward a future in which nanophotonic devices are not only compact and efficient, but also reprogrammable and adaptive, capable of dynamically responding to external inputs. The marriage of emerging quantum materials and established nanophotonics architectures will surely bring advances to both fields,” says Riccardo Comin, MIT’s Class of 1947 Career Development Associate Professor of Physics and leader of the work. Comin is also affiliated with MIT’s Materials Research Laboratory and Research Laboratory of Electronics (RLE).

Comin’s colleagues on the work are Ahmet Kemal Demir, an MIT graduate student in physics; Luca Nessi, a former MIT postdoc who is now a postdoc at Politecnico di Milano; Sachin Vaidya, a postdoc in RLE; Connor A. Occhialini PhD ’24, who is now a postdoc at Columbia University; and Marin Soljačić, the Cecil and Ida Green Professor of Physics at MIT.

Demir and Nessi are co-first authors of the Nature Photonics paper.

Toward new nanophotonic materials

Nanophotonics has traditionally relied on materials like silicon, silicon nitride, or titanium dioxide. These are the building blocks of devices that guide and confine light using structures such as waveguides, resonators, and photonic crystals. The latter are periodic arrangements of materials that control how light propagates, much like how a semiconductor crystal affects electron motion.

While highly effective, these materials are constrained by two major limitations. The first involves their refractive indices. These are a measure of how strongly a material interacts with light; the higher the refractive index, the more the material “grabs” or interacts with the light, bending it more sharply and slowing it down more. The refractive indices of silicon and other traditional nanophotonic materials are often modest, which limits how tightly light can be confined and how small optical devices can be made.
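
To make the refractive-index point concrete, here is a minimal Python sketch of the two effects described above: a higher index n lowers the phase velocity v = c/n and, via Snell’s law, bends an incoming ray more sharply. The helper names and index values are illustrative placeholders, not figures from this work.

import math
C = 299_792_458  # speed of light in vacuum, in meters per second
def phase_velocity(n: float) -> float:
    # Speed at which a light wave's phase advances inside a medium of refractive index n.
    return C / n
def refraction_angle(n1: float, n2: float, theta1_deg: float) -> float:
    # Snell's law: n1 * sin(theta1) = n2 * sin(theta2); returns theta2 in degrees.
    theta1 = math.radians(theta1_deg)
    return math.degrees(math.asin(n1 * math.sin(theta1) / n2))
# Illustrative indices only: roughly 1.0 for air, roughly 3.5 for silicon near telecom
# wavelengths, and a notionally larger value standing in for a strongly excitonic material.
for n in (1.0, 3.5, 6.0):
    print(f"n = {n}: phase velocity = {phase_velocity(n):.3e} m/s, "
          f"a 30-degree ray refracts to {refraction_angle(1.0, n, 30.0):.1f} degrees")

Because the wavelength inside a material also shrinks by the same factor of n, a larger index lets optical structures confine light within a smaller footprint, which is the miniaturization advantage described above.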

A second major limitation of traditional nanophotonic materials: once a structure is fabricated, its optical behavior is essentially fixed. There is usually no way to significantly reconfigure how it responds to light without physically altering it. “Tunability is essential for many next-gen photonics applications, enabling adaptive imaging, precision sensing, reconfigurable light sources, and trainable optical neural networks,” says Vaidya.

Introducing chromium sulfide bromide

These are the longstanding challenges that chromium sulfide bromide (CrSBr) is poised to solve. CrSBr is a layered quantum material with a rare combination of magnetic order and strong optical response. Central to its unique optical properties are excitons: quasiparticles formed when a material absorbs light and an electron is excited, leaving behind a positively charged “hole.” The electron and hole remain bound together by electrostatic attraction, forming a sort of neutral particle that can strongly interact with light.

In CrSBr, excitons dominate the optical response and are highly sensitive to magnetic fields, which means they can be manipulated using external controls.

Because of these excitons, CrSBr exhibits an exceptionally large refractive index that allows researchers to sculpt the material to fabricate optical structures like photonic crystals that are up to an order of magnitude thinner than those made from traditional materials. “We can make optical structures as thin as 6 nanometers, or just seven layers of atoms stacked on top of each other,” says Demir.

And crucially, by applying a modest magnetic field, the MIT researchers were able to continuously and reversibly switch the optical mode. In other words, they demonstrated the ability to dynamically change how light flows through the nanostructure, all without any moving parts or changes in temperature. “This degree of control is enabled by a giant, magnetically induced shift in the refractive index, far beyond what is typically achievable in established photonic materials,” says Demir.

In fact, the interaction between light and excitons in CrSBr is so strong that it leads to the formation of polaritons, hybrid light-matter particles that inherit properties from both components. These polaritons enable new forms of photonic behavior, such as enhanced nonlinearities and new regimes of quantum light transport. And unlike conventional systems that require external optical cavities to reach this regime, CrSBr supports polaritons intrinsically.

While this demonstration uses standalone CrSBr flakes, the material can also be integrated into existing photonic platforms, such as integrated photonic circuits. This makes CrSBr immediately relevant to real-world applications, where it can serve as a tunable layer or component in otherwise passive devices.

The MIT results were achieved at very cold temperatures of up to 132 kelvins (-222 degrees Fahrenheit). Although this is below room temperature, there are compelling use cases, such as quantum simulation, nonlinear optics, and reconfigurable polaritonic platforms, where the unparalleled tunability of CrSBr could justify operation in cryogenic environments.
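
As a quick arithmetic check on the quoted operating temperature, the standard kelvin-to-Fahrenheit conversion can be sketched as follows; nothing here is specific to the study.

def kelvin_to_fahrenheit(t_k: float) -> float:
    # Standard conversion: degrees Fahrenheit = (kelvins - 273.15) * 9/5 + 32.
    return (t_k - 273.15) * 9 / 5 + 32
print(round(kelvin_to_fahrenheit(132.0)))  # prints -222, matching the figure quoted above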

In other words, says Demir, “CrSBr is so unique with respect to other common materials that even going down to cryogenic temperatures will be worth the trouble, hopefully.”

That said, the team is also exploring related materials with higher magnetic ordering temperatures to enable similar functionality at more accessible conditions.

This work was supported by the U.S. Department of Energy, the U.S. Army Research Office, and a MathWorks Science Fellowship. The work was performed in part at MIT.nano.

© Image: Sampson Wilcox and Michael Hurley/Research Laboratory of Electronics

Graphic illustrating MIT’s new platform for manipulating light on the nanoscale. Thin structures represent patterned chromium sulfide bromide, a layered quantum material with different optical responses (represented by different shades of blue) depending on the application of a magnetic field. The orange and pink structure represents the resulting enhancement of light-matter interactions.

STEER India 2025: Exploring community and cultural development in a dynamic nation

In June this year, seven NUS students from various faculties embarked on a nine-day journey to New Delhi and Agra in India as part of the Study Trips for Engagement and EnRichment (STEER) programme led by Associate Professor Loh Wai Lam, Academic Director of NUS Global Relations Office (GRO). The trip, jointly organised by GRO and the Office of International Affairs & Global Initiatives at O.P. Jindal Global University (JGU), was curated to enable the students to experience the vibrancy and dynamism of India through a blend of academic discourse and cultural immersion.

Breaking existing mindsets and experiencing the cultural landscape

Through classroom sessions, site visits, and direct engagement with local communities and institutions, students were able to examine the relationship between community building and cultural expression in India, and gain insights into the intricacies of the country’s social fabric and development efforts.

A key component of the programme was a series of six classroom sessions at JGU which provided students with foundational knowledge of the historical, social, and economic factors shaping community development in India. At a visit to the Constitution Museum on campus, they also learned about the birth of the Indian Constitution, which was established following the country’s independence from British rule.

For second-year Business Administration undergraduate Kuo Tsung Hsun, learning how the nation emerged as the world’s largest democracy was deeply inspiring. “The value of democracy is not something easily earned. This is an idea that I’m beginning to appreciate,” he reflected.

In addition to learning about India’s history and governance, the students examined the enduring impact of the caste system — a social hierarchy that, although legally abolished, continues to influence access, identity, and opportunity in both subtle and overt ways. The students also explored India’s cultural heritage, diving into the preservation of indigenous languages as a means of maintaining community identity, and the role of traditional ecological knowledge in advancing sustainable development.

A visit to a village in Sonipat, where JGU is located, and an interaction with a local self-help group also left a lasting impression on the students. They were deeply moved by the stories of hardship and perseverance from women who had started businesses despite social and financial barriers. Many were inspired by their simple yet powerful message to “just dream”.

Reflecting on the encounter, second-year Psychology undergraduate Ameline Ang said, “These women carved out their journeys not because conditions were ideal, but because they believed change was possible. Their entrepreneurial spirit wasn’t built on access, but on sheer will. It made me reflect on my own privilege and the responsibility I have to create meaningfully, not just comfortably.”

These experiences were further enriched by a visit to the High Commission of the Republic of Singapore in New Delhi, where the students had the opportunity to engage with Singaporean diplomats and gained first-hand insights into their roles and daily lives in India.

No STEER trip would be complete without visits to iconic historical sites. The first visit was to Humayun’s Tomb, a UNESCO World Heritage Site located in Delhi that was built in 1570. As the first garden-tomb in the Indian subcontinent, it represented a turning point in Mughal architecture, setting the stage for later masterpieces like the Taj Mahal, another UNESCO World Heritage Site, which the students visited later in their trip.

Looking back and moving forward

At the end of the programme, the students were asked to reflect on their experiences in India: what were they passionate about; what emotions did they experience; what changes in mindset did they undergo; and what tangible plans could they make and act on?

For Eric Sim, a third-year Information Security undergraduate, the visit marked a fitting end to an insightful and rewarding journey in India. More than just academic learning, the trip offered him fresh perspectives and lasting memories. “It was so much fun that it’s etched in my mind to this day—and hopefully these memories will last for more than a lifetime. I already miss every moment with them. Dhanyavaad (which means ‘Thank you’ in Hindi), everyone.”

 

By NUS Global Relations Office

Cambridge Innovation Capital commits £100m to back University of Cambridge spinouts

Cambridge Innovation Capital (CIC), the VC firm investing in the UK’s highest-potential deep tech and life sciences companies, is committing at least £100m to invest in spinouts from the University of Cambridge.

The funding will seek to take advantage of the vast commercial potential in science and technology innovation developed by Cambridge researchers and follows a series of recent initiatives from the university designed to support entrepreneurial academics. These include plans for four million sq. ft of high-tech development at Cambridge West and a new Innovation Hub in central Cambridge to host spinouts, startups, and entrepreneurs.

Dr Diarmuid O’Brien, Pro-Vice-Chancellor for Innovation at the University of Cambridge, welcomed the investment that will help build on current success: “In 2024, the University of Cambridge created more spinout companies than any other university. It has also produced the most unicorns of any European ecosystem and generates £23 billion in economic interest linked to research and commercialisation each year.”

To coincide with the funding commitment, CIC is launching a new Entrepreneur in Residence (EIR) programme, in partnership with the university, to identify IP with the potential for commercialisation and support academic founders as they begin to build a company. By matching experienced deep tech and life sciences executives, many of whom have achieved significant exits with their previous businesses, with academics and high-potential IP, the EIR programme will increase the number of quality spinouts and accelerate the path towards viable commercialisation of the technology. The EIR programme will maintain a rolling cohort of up to six EIRs.

This is the latest initiative to support spinouts from the university. The University of Cambridge, through its innovation business, Cambridge Enterprise, has also launched the Founders programme to support new company creation and the Technology Investment Fund (TIF), a new proof-of-concept fund to de-risk world-class research and enable faster commercialisation, and has invested an additional £30m of new capital into the £100m AUM Cambridge Enterprise Ventures (CEV) fund to support increasing University investment in spinouts.

“We are determined to do even more, and faster, through initiatives such as the new EIR programme and by attracting investment into our spinout companies, working with partners like Cambridge Innovation Capital,” added Dr O’Brien.

CIC is committing at least £100m as part of the launch of Fund III, its latest £250 million early-stage venture fund focused on the Cambridge ecosystem, to invest in University of Cambridge spinouts. Companies created within the EIR programme can access the new funding to support development from inception through proof-of-concept to early-stage growth.

Andrew Williamson, Managing Partner, CIC, explained the reasons for focusing on Cambridge University spinouts: “Cambridge is at the forefront of innovation in deep tech and life sciences. Our new EIR programme will provide academics and researchers with access to the £100m we are committing to University of Cambridge spinouts as they continue to develop breakthrough technologies. This expansion of CIC’s long-standing partnership with the University of Cambridge, which provides unique access to the university’s academics and research, will help support the UK’s economic growth by developing the next generation of world-class companies.”

New Entrepreneur in Residence programme builds on work by Cambridge Enterprise and will match serial entrepreneurs with academics to develop world-leading ideas and IP.

How government accountability and responsiveness affect tax payment

A fundamental problem for governments is getting citizens to comply with their laws and policies. They can’t monitor everyone and catch all the rule-breakers. “It’s a logistical impossibility,” says Lily L. Tsai, MIT’s Ford Professor of Political Science and the director and founder of the MIT Governance Lab.

Instead, governments need citizens to choose to follow the rules of their own accord. “As a government, you have to rely on them to voluntarily comply with the laws, policies, and regulations that are put into place,” Tsai says.

One particularly important thing governments need citizens to do is pay their taxes. In a paper in the October issue of the journal World Development, Tsai and her co-authors, including Minh Trinh ’22, a graduate of the Department of Political Science, look at different factors that might affect compliance with property tax laws in China. They found that study participants in an in-person tax-paying experiment were more likely to pay their taxes if government officials were monitoring and punishing corruption.

“When people think that government authorities are motivated by the public good, have moral character, and have integrity, then the requests that those authorities make of citizens are more likely to seem legitimate, and so they’re more likely to pay their taxes,” Tsai says.

In China, only two cities, Chongqing and Shanghai, collect property taxes. Officials have been concerned that citizens might resist property taxes because homeownership is the main source of urban household wealth in China. Private homeownership accounts for 64 percent of household wealth in China, compared to only 29 percent in the United States.

Tsai and her co-authors wanted to test how governments might make people more willing to pay their property taxes. Researchers have theorized that citizens are more likely to comply with tax laws when they feel like they’re getting something in return from the government. The government can be responsive to citizens’ demands for public services, for example. Or the government can punish officials who are corrupt or perform poorly.

In the first part of the study, a survey of Chinese citizens, respondents expressed preferences for different hypothetical property tax policies. The results suggested that participants wanted the government to be responsive to their needs and to hold officials accountable. People preferred a policy that allowed for citizen input on the use of tax revenue over one that did not, and a policy that allowed for the sanctioning of corrupt officials garnered more support than a policy that did not.

Survey participants also preferred a lighter penalty for not paying their taxes over a harsher penalty, and they supported a tax exemption for first apartments. Interestingly to the researchers, policies that allowed for government responsiveness and accountability received roughly the same support as these policies with economic benefits. “This is evidence to show that we should really pay attention to non-economic factors, because they can have similar magnitudes of impact on tax-paying behavior,” Tsai says.

For the second stage of the study, researchers recruited people for a lab experiment in Shanghai (one of the two cities that collects property taxes). Participants played a game on an iPad in which they chose repeatedly whether or not to pay property taxes. At the end of the game, they received an amount of real money that varied depending on how they and other participants played the game.

Participants were then randomly split into different groups. In one group, participants were given an opportunity to voice their preference for how their property tax revenue was used. Some were told the government incorporated their feedback, while others were told their preferences were not considered — in other words, participants learned whether or not the government was responsive to their needs. In another group, participants learned that a corrupt official had stolen money from property tax revenue. Some were told that the official had been caught and punished, while others were told the official got away with stealing.

The researchers measured whether game players’ willingness to pay property taxes changed after receiving this new information. They found that while the willingness of players who learned the government was responsive to their needs did not change significantly, players who learned the government punished corrupt officials paid their property taxes more frequently.
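
As a rough illustration of the before-and-after comparison described above, the hypothetical Python sketch below simulates two treatment arms and reports the difference in how payment rates change. The function name, player counts, and probabilities are invented for illustration; this is not the study’s code or data.

import random
random.seed(0)
def average_change(n_players: int, rounds: int, p_before: float, p_after: float) -> float:
    # Each player pays in a given round with probability p_before before the treatment
    # and p_after afterward; returns the mean change in per-player payment rate.
    changes = []
    for _ in range(n_players):
        before = sum(random.random() < p_before for _ in range(rounds)) / rounds
        after = sum(random.random() < p_after for _ in range(rounds)) / rounds
        changes.append(after - before)
    return sum(changes) / len(changes)
# Hypothetical probabilities, chosen only to illustrate the shape of the comparison.
punished = average_change(n_players=200, rounds=10, p_before=0.60, p_after=0.72)
got_away = average_change(n_players=200, rounds=10, p_before=0.60, p_after=0.60)
print(f"average change, 'official punished' arm: {punished:+.3f}")
print(f"average change, 'official got away' arm: {got_away:+.3f}")
print(f"difference between arms: {punished - got_away:+.3f}")

The sketch only mirrors the structure of the comparison; the study’s actual estimates come from its own experimental data and analysis.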

“It was kind of amazing to see that people care a lot about whether or not higher-level authorities are making sure that tax dollars are not being wasted through corruption,” Tsai says. She argues in her 2021 book, “When People Want Punishment: Retributive Justice and the Puzzle of Authoritarian Popularity,” that when authorities are willing to punish their own officials, it may signal to people that leaders have moral integrity and share the values of ordinary people, making them appear more legitimate.

While the researchers expected to see government responsiveness affect tax payment as well, Tsai says it’s not totally surprising that for people living in places without direct channels for citizen input, the opportunity to participate in the decision-making process in a lab setting might not resonate as strongly.

The findings don’t mean that government responsiveness isn’t important. But they suggest that even when there aren’t opportunities for citizens to make their voices heard, there are other ways for governments to appear legitimate and get people to comply with rules voluntarily.

As the strength of democratic institutions declines globally, scholars wonder whether perceptions of governments’ legitimacy will decline at the same time. “These findings suggest that maybe that’s not necessarily the case,” Tsai says.

© Photo: Howie Wang/Unsplash

A home in Shanghai, China. Shanghai is one of two cities in China that collects property taxes.

School of Humanities, Arts, and Social Sciences welcomes 14 new faculty for 2025

Dean Agustín Rayo and the MIT School of Humanities, Arts, and Social Sciences (SHASS) recently welcomed 14 new professors to the MIT community. They arrive with diverse backgrounds and vast knowledge in their areas of research.

Naoki Egami joins MIT as an associate professor in the Department of Political Science. He is also a faculty affiliate of the Institute for Data, Systems, and Society. Egami specializes in political methodology and develops statistical methods for questions in political science and the social sciences. His current research programs focus on three areas: external validity and generalizability; machine learning and artificial intelligence for the social sciences; and causal inference with network and spatial data. His work has appeared in various academic journals in political science, statistics, and computer science, such as American Political Science Review, American Journal of Political Science, Journal of the American Statistical Association, Journal of the Royal Statistical Society (Series B), NeurIPS, and Science Advances. Before joining MIT, Egami was an assistant professor at Columbia University. He received a PhD from Princeton University (2020) and a BA from the University of Tokyo (2015).

Valentin Figueroa joins the Department of Political Science as an assistant professor. His research examines historical state building, ideological change, and scientific innovation, with a regional focus on Western Europe and Latin America. His current book project investigates the disestablishment of patrimonial administrations and the rise of bureaucratic states in early modern Europe. Before joining MIT, he was an assistant professor at the Pontificia Universidad Católica de Chile. Originally from Argentina, Figueroa holds a BA and an MA in political science from Universidad de San Andrés and Universidad Torcuato Di Tella, respectively, and a PhD in political science from Stanford University.

Bailey Flanigan is an assistant professor in the Department of Political Science, with a shared appointment in the MIT Schwarzman College of Computing in the Department of Electrical Engineering and Computer Science. Her research combines tools from across these disciplines — including social choice theory, game theory, algorithms, statistics, and survey methods — to advance political methodology and strengthen public participation in democracy. She is specifically interested in sampling algorithms, opinion measurement/preference elicitation, and the design of democratic innovations like deliberative minipublics and participatory budgeting. Before joining MIT, Flanigan was a postdoc at Harvard University’s Data Science Initiative. She earned her PhD in computer science from Carnegie Mellon University and her BS in bioengineering from the University of Wisconsin at Madison.

Rachel Fraser is an associate professor in the Department of Linguistics and Philosophy. Before coming to MIT, Fraser taught at Oxford University, where she also completed her graduate work in philosophy. She has interests in epistemology, language, feminism, aesthetics, and political philosophy. At present, her main project is a book manuscript on the epistemology of narrative.

Brian Hedden PhD ’12 is a professor in the Department of Linguistics and Philosophy, with a shared appointment in the MIT Schwarzman College of Computing in the Department of Electrical Engineering and Computer Science. His research focuses on how we ought to form beliefs and make decisions. He works in epistemology, decision theory, and ethics, including ethics of AI. He is the author of “Reasons without Persons: Rationality, Identity, and Time” (Oxford University Press, 2015) and articles on topics including collective action problems, legal standards of proof, algorithmic fairness, and political polarization, among others. Prior to joining MIT, he was a faculty member at the Australian National University and the University of Sydney, and a junior research fellow at Oxford. He received his BA from Princeton University in 2006 and his PhD from MIT in 2012.

Rebekah Larsen is an assistant professor in the Comparative Media Studies/Writing program. A media sociologist with a PhD from Cambridge University, her work uncovers and analyzes understudied media ecosystems, with special attention to sociotechnical change and power relations within these systems. Recent scholarly sites of inquiry include conservative talk radio stations in rural Utah (and ethnographic work in conservative spaces); the new global network of fact checkers funded by social media platform content moderation contracts; and search engine manipulation of journalists and activists around a controversial 2010s privacy regulation. Prior to MIT, Larsen held a Marie Curie grant at the University of Copenhagen, and was a visiting fellow at the Information Society Project (Yale Law School). She maintains current affiliations as a faculty associate at the Berkman Klein Center (Harvard Law School) and a research associate at the Center for Governance and Human Rights (Cambridge University).

Pascal Le Boeuf joins the Music and Theater Arts Section as an assistant professor. Described as “sleek, new,” “hyper-fluent,” and “a composer that rocks” by The New York Times, he is a Grammy Award-winning composer, jazz pianist, and producer whose works range from improvised music to hybridizing notation-based chamber music with production-based technology. Recent projects include collaborations with Akropolis Reed Quintet, Christian Euman, Jamie Lidell, Alarm Will Sound, Ji Hye Jung, Tasha Warren, Dave Eggar, Barbora Kolarova and Arx Duo, JACK Quartet, Friction Quartet, Hub New Music, Todd Reynolds, Sara Caswell, Jessica Meyer, Nick Photinos, Ian Chang, Dayna Stephens, Linda May Han Oh, Justin Brown, and Le Boeuf Brothers. He received a 2025 Grammy Award for Best Instrumental Composition, a 2024 Barlow Commission, a 2023 Guggenheim Fellowship, and a 2020 Copland House Residency Award. Le Boeuf is a Harold W. Dodds Honorific Fellow and PhD candidate in music composition at Princeton University.

Becca Lewis is an assistant professor in the Comparative Media Studies/Writing program. An interdisciplinary scholar who examines the rise of right-wing politics in Silicon Valley and online, she holds a PhD in communication theory and research from Stanford University and an MS in social science from the University of Oxford. Her work has been published in academic journals including New Media and Society, Social Media and Society, and American Behavioral Scientist, and in news outlets such as The Guardian and Business Insider. She previously worked as a researcher at the Data and Society Research Institute, where she published the organization’s flagship reports on media manipulation, disinformation, and right-wing digital media. In 2022, she served as an expert witness in the defamation lawsuit brought against Alex Jones by the parents of a Sandy Hook shooting victim.

Ben Lindquist is an assistant professor in the History Section, with a shared appointment in the MIT Schwarzman College of Computing in the Department of Electrical Engineering and Computer Science. His work observes the historical ways that computing has circulated with ideas of religion, emotion, and divergent thinking. “The Feeling Machine,” his first book, under contract with the University of Chicago Press, follows the history of synthetic speech to ask how emotion became a subject of computer science. Before coming to MIT, he was a postdoc in the Science in Human Culture Program at Northwestern University and earned his PhD in history from Princeton University.

Bar Luzon joins the Department of Linguistics and Philosophy as an assistant professor. Luzon completed her BA in philosophy in 2017 at the Hebrew University of Jerusalem, and her PhD in philosophy in 2024 at New York University. Before coming to MIT, she was a Mellon Postdoctoral Fellow in the Philosophy Department at Rutgers University. She works in the philosophy of mind and language, metaphysics, and epistemology. Her research focuses on the nature of representation and the structure of reality. In the course of pursuing these issues, she writes about mental content, metaphysical determination, the vehicles of mental representation, and the connection between truth and different epistemic notions.

Mark Rau is an assistant professor in the Music and Theater Arts Section, with a shared appointment in the MIT School of Engineering in the Department of Electrical Engineering and Computer Science. He is involved in developing graduate programming focused on music technology. He is interested in the fields of musical acoustics, vibration and acoustic measurement, audio signal processing, and physical modeling synthesis, among other areas. As a lifelong musician, his research focuses on musical instruments and creative audio effects. Before joining MIT, he was a postdoc at McGill University and a lecturer at Stanford University. He completed his PhD at Stanford’s Center for Computer Research in Music and Acoustics. He also holds an MA in music, science, and technology from Stanford, as well as a BS in physics and BMus in jazz from McGill University.

Angela Saini joins the Comparative Media Studies/Writing program as an assistant professor. A science journalist and author, she presents television and radio documentaries for the BBC and her writing has appeared in National Geographic, Wired, Science, and Foreign Policy. She has published four books, which have together been translated into 18 languages. Her bestselling 2019 book, “Superior: The Return of Race Science,” was a finalist for the LA Times Book Prize, and her latest, “The Patriarchs: The Origins of Inequality,” was a finalist for the Orwell Prize for Political Writing. She has an MEng from the University of Oxford, and was made an honorary fellow of her alma mater, Keble College, in 2023.

Viola Schmitt is an associate professor in the Department of Linguistics and Philosophy. She is a linguist with a special interest in semantics. Much of her work focuses on trying to understand general constraints on human language meaning; that is, the principles regulating which meanings can be expressed by human languages and how languages can package meaning. Variants of this question were also central to grants she received from the Austrian and German research foundations. She earned her PhD in linguistics from the University of Vienna and worked as a postdoc and/or lecturer at the Universities of Vienna, Graz, Göttingen, and at the University of California at Los Angeles. Her most recent position was as a junior professor at the Humboldt University Berlin.

Paris Smaragdis SM ’97, PhD ’01 joins the Music and Theater Arts Section as a professor with a shared appointment in the MIT Schwarzman College of Computing in the Department of Electrical Engineering and Computer Science. He holds a BMus (cum laude ’95) from Berklee College of Music. His research lies at the intersection of signal processing and machine learning, especially as it relates to sound and music. He has been a research scientist at Mitsubishi Electric Research Labs, a senior research scientist at Adobe Research, and an Amazon Scholar with Amazon’s AWS. He spent 15 years as a professor in the Computer Science Department at the University of Illinois Urbana-Champaign, where he spearheaded the design of the CS+Music program and served as an associate director of the School of Computer and Data Science.

© Photos courtesy of SHASS.

Top row, from left to right: Naoki Egami, Valentin Figueroa, Bailey Flanigan, Rachel Fraser, and Brian Hedden. Second row, from left to right: Rebekah Larsen, Pascal Le Boeuf, Becca Lewis, Ben Lindquist, and Bar Luzon. Third row, from left to right: Mark Rau, Angela Saini, Viola Schmitt, and Paris Smaragdis.

How the brain distinguishes oozing fluids from solid objects

Imagine a ball bouncing down a flight of stairs. Now think about a cascade of water flowing down those same stairs. The ball and the water behave very differently, and it turns out that your brain has different regions for processing visual information about each type of physical matter.

In a new study, MIT neuroscientists have identified parts of the brain’s visual cortex that respond preferentially when you look at “things” — that is, rigid or deformable objects like a bouncing ball. Other brain regions are more activated when looking at “stuff” — liquids or granular substances such as sand.

This distinction, which has never been seen in the brain before, may help the brain plan how to interact with different kinds of physical materials, the researchers say.

“When you’re looking at some fluid or gooey stuff, you engage with it in a different way than you do with a rigid object. With a rigid object, you might pick it up or grasp it, whereas with fluid or gooey stuff, you probably are going to have to use a tool to deal with it,” says Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience; a member of the McGovern Institute for Brain Research and MIT’s Center for Brains, Minds, and Machines; and the senior author of the study.

MIT postdoc Vivian Paulun, who is joining the faculty of the University of Wisconsin at Madison this fall, is the lead author of the paper, which appears today in the journal Current Biology. RT Pramod, an MIT postdoc, and Josh Tenenbaum, an MIT professor of brain and cognitive sciences, are also authors of the study.

Stuff vs. things

Decades of brain imaging studies, including early work by Kanwisher, have revealed regions in the brain’s ventral visual pathway that are involved in recognizing the shapes of 3D objects, including an area called the lateral occipital complex (LOC). A region in the brain’s dorsal visual pathway, known as the frontoparietal physics network (FPN), analyzes the physical properties of materials, such as mass or stability.

Although scientists have learned a great deal about how these pathways respond to different features of objects, the vast majority of these studies have been done with solid objects, or “things.”

“Nobody has asked how we perceive what we call ‘stuff’ — that is, liquids or sand, honey, water, all sorts of gooey things. And so we decided to study that,” Paulun says.

These gooey materials behave very differently from solids. They flow rather than bounce, and interacting with them usually requires containers and tools such as spoons. The researchers wondered if these physical features might require the brain to devote specialized regions to interpreting them.

To explore how the brain processes these materials, Paulun used a software program designed for visual effects artists to create more than 100 video clips showing different types of things or stuff interacting with the physical environment. In these videos, the materials could be seen sloshing or tumbling inside a transparent box, being dropped onto another object, or bouncing or flowing down a set of stairs.

The researchers used functional magnetic resonance imaging (fMRI) to scan the visual cortex of people as they watched the videos. They found that both the LOC and the FPN respond to “things” and “stuff,” but that each pathway has distinctive subregions that respond more strongly to one or the other.
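
To make the idea of a preference map concrete, here is a minimal, hypothetical sketch of the kind of voxel-by-voxel “things versus stuff” contrast such a finding rests on: average each simulated voxel’s response across the two categories of video clips and take the difference. This is not the study’s analysis code; the array sizes, response values, and cutoff are invented for illustration.

```python
# Hypothetical illustration of a "things vs. stuff" preference map; not the
# researchers' pipeline. Each row is a simulated voxel, each column a video clip.
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_thing_clips, n_stuff_clips = 1000, 50, 50

# Simulated response amplitudes (stand-ins for per-clip fMRI beta weights).
things = rng.normal(loc=1.0, scale=0.5, size=(n_voxels, n_thing_clips))
stuff = rng.normal(loc=1.0, scale=0.5, size=(n_voxels, n_stuff_clips))

# Positive contrast = stronger average response to rigid "things",
# negative = stronger average response to flowing "stuff".
contrast = things.mean(axis=1) - stuff.mean(axis=1)

threshold = 0.2  # arbitrary illustrative cutoff
prefers_things = contrast > threshold
prefers_stuff = contrast < -threshold

print(f"{prefers_things.sum()} voxels lean toward 'things', "
      f"{prefers_stuff.sum()} toward 'stuff'")
```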

“Both the ventral and the dorsal visual pathway seem to have this subdivision, with one part responding more strongly to ‘things,’ and the other responding more strongly to ‘stuff,’” Paulun says. “We haven’t seen this before because nobody has asked that before.”

Roland Fleming, a professor of experimental psychology at Justus Liebig University Giessen, described the findings as a “major breakthrough in the scientific understanding of how our brains represent the physical properties of our surrounding world.”

“We’ve known the distinction exists for a long time psychologically, but this is the first time that it’s been really mapped onto separate cortical structures in the brain. Now we can investigate the different computations that the distinct brain regions use to process and represent objects and materials,” says Fleming, who was not involved in the study.

Physical interactions

The findings suggest that the brain may have different ways of representing these two categories of material, similar to the artificial physics engines that are used to create video game graphics. These engines usually represent a 3D object as a mesh, while fluids are represented as sets of particles that can be rearranged.
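
As a rough illustration of that distinction — a sketch of the general idea only, not the engine or code used in the study — a rigid “thing” can be stored as a mesh whose vertices all move under a single shared transform, while “stuff” is more naturally a bag of particles that update independently and can rearrange:

```python
# Minimal, hypothetical sketch of two representations a physics engine might use.
from dataclasses import dataclass, field
from typing import List, Tuple

Vec = Tuple[float, float, float]

def move(point: Vec, velocity: Vec, dt: float) -> Vec:
    return tuple(p + v * dt for p, v in zip(point, velocity))

@dataclass
class RigidBody:                      # a "thing": fixed mesh, moves as one unit
    vertices: List[Vec]               # relative shape never changes
    position: Vec = (0.0, 0.0, 0.0)
    velocity: Vec = (0.0, 0.0, 0.0)

    def step(self, dt: float) -> None:
        # One shared transform carries the whole mesh along.
        self.position = move(self.position, self.velocity, dt)

@dataclass
class Fluid:                          # "stuff": particles free to rearrange
    particles: List[Vec] = field(default_factory=list)
    velocities: List[Vec] = field(default_factory=list)

    def step(self, dt: float) -> None:
        # Each particle is updated on its own.
        self.particles = [move(p, v, dt)
                          for p, v in zip(self.particles, self.velocities)]

ball = RigidBody(vertices=[(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)], velocity=(0.0, -9.8, 0.0))
water = Fluid(particles=[(0.0, 1.0, 0.0), (0.1, 1.2, 0.0)],
              velocities=[(0.0, -9.8, 0.0), (0.3, -9.5, 0.0)])
ball.step(0.01)
water.step(0.01)
```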

“The interesting hypothesis that we can draw from this is that maybe the brain, similar to artificial game engines, has separate computations for representing and simulating ‘stuff’ and ‘things.’ And that would be something to test in the future,” Paulun says.

The researchers also hypothesize that these regions may have developed to help the brain understand important distinctions that allow it to plan how to interact with the physical world. To further explore this possibility, the researchers plan to study whether the areas involved in processing rigid objects are also active when a brain circuit involved in planning to grasp objects is active.

They also hope to look at whether any of the areas within the FPN correlate with the processing of more specific features of materials, such as the viscosity of liquids or the bounciness of objects. And in the LOC, they plan to study how the brain represents changes in the shape of fluids and deformable substances.

The research was funded by the German Research Foundation, the U.S. National Institutes of Health, and a U.S. National Science Foundation grant to the Center for Brains, Minds, and Machines.

© Credit: MIT News; Video stills courtesy of the researchers

“Nobody has asked how we perceive what we call ‘stuff’ — that is, liquids or sand, honey, water, all sorts of gooey things. And so we decided to study that,” Vivian Paulun says.

New treatment could reduce brain damage from stroke, study in mice shows

Woman visiting an elderly man in hospital

As many as one in four people will have a stroke during their lifetime. Most strokes occur when a blood clot prevents oxygen from reaching a part of the brain. The first few hours following a stroke are crucial – the blood clot needs to be removed quickly so that the oxygen supply to the brain can be restored; otherwise, the brain tissue begins to die.

Currently, the outcome for stroke patients receiving even the best available treatment, known as mechanical thrombectomy, is still poor, with fewer than one in 10 patients leaving hospital with no neurological impairment.

Professor Thomas Krieg from the Department of Medicine at the University of Cambridge said: “Stroke is a devastating disease. Even for those who survive, there is a significant risk of damage to the brain that can lead to disabilities and a huge impact on an individual’s life. But in terms of treatment, once the stroke is happening, we have only limited options.”

Mechanical thrombectomy is a minimally invasive medical procedure involving the insertion of a thin tube, known as a catheter, into a blood vessel, often through the groin or arm. This is guided to the blood clot, where it is removed by a tiny device, restoring normal blood flow.

Restoring blood flow too suddenly can make things worse, however. This is called ischaemia-reperfusion injury. When blood rushes back into the oxygen-starved tissue (a process known as reperfusion), the damaged cells struggle to cope, leading to the production of harmful molecules called free radicals that can damage cells, proteins, and DNA. This triggers further damage and can cause an inflammatory response.

The Cambridge team has previously shown that when the brain is starved of oxygen, a build-up occurs of a chemical called succinate. When blood flow is restored, the succinate is rapidly oxidised to drive free radical production within mitochondria, the ‘batteries’ that power our cells, initiating the extra damage. This occurs within the first few minutes of reperfusion, but the researchers showed that the oxidation of succinate can be blocked by the molecule malonate.

Professor Mike Murphy from the Medical Research Council Mitochondrial Biology Unit said: “All of this happens very rapidly, but if we can get malonate in quickly at the start of reperfusion, we can prevent this oxidation and burst of free radicals.

“We discovered in our labs that we can get malonate into cells very quickly by lowering the pH a little, making it a bit more acidic, so that it can cross the blood-brain barrier better. If we inject it into the brain just as we’re ready to reperfuse, then we can potentially prevent further damage.”

In a study published in Cardiovascular Research, the team has shown that treating the brain with a form of the chemical known as acidified disodium malonate (aDSM) alongside mechanical thrombectomy reduced the brain damage caused by ischaemia-reperfusion injury by as much as 60%.

Dr Jordan Lee, a postdoctoral researcher in the group, developed a mouse model that mimics mechanical thrombectomy, allowing the team to test the effectiveness of aDSM against ischaemia-reperfusion injury.

Dr Lee said: “This approach reduces the amount of dead brain tissue resulting from a stroke. This is incredibly important because the amount of dead brain tissue is directly correlated to the patient’s recovery – to their disability, whether they can still use all their limbs, speak and understand language, for example.”

Mechanical thrombectomy is increasingly used in the NHS, and the researchers hope that with the addition of aDSM as a treatment alongside this intervention, they will be able to improve outcomes significantly when the procedure is more widely adopted.

The team has launched Camoxis Therapeutics, a spin-out company, with support from Cambridge Enterprise, the innovation arm of the University of Cambridge. It is now seeking seed funding to develop the drug further and take it to early-stage clinical trials.

Professor Murphy added: “If it’s successful, this same drug could have much wider applications for other instances of ischemia-reperfusion injuries, such as heart attack, resuscitation, organ transplantation, and so on, which have similar underlying mechanisms.”

The research was supported by the British Heart Foundation, Medical Research Council, Wellcome Trust and the National Institute for Health and Care Research Cambridge Biomedical Research Centre.

Reference

Lee, JJ et al. Local arterial administration of acidified malonate as an adjunct therapy to mechanical thrombectomy in ischemic stroke. Cardiovascular Research; 27 Jun 2025; DOI: 10.1093/cvr/cvaf118

Cambridge scientists have developed and tested a new drug in mice that has the potential to reduce damage to the brain when blood flow is restored following a stroke.

Stroke is a devastating disease. Even for those who survive, there is a significant risk of damage to the brain that can lead to disabilities and a huge impact on an individual’s life
Thomas Krieg

Bezos Centre @NUS and EnterpriseSG launch new startup competition to boost early-stage sustainable protein innovation

The Bezos Centre for Sustainable Protein at the National University of Singapore (Bezos Centre @NUS) and Enterprise Singapore (EnterpriseSG) announced a new partnership to accelerate early-stage sustainable protein innovation through a startup competition. This is part of Bezos Centre @NUS’ US$3 million in grants over five years to support early and growth-stage startups in this space.

The inaugural Sustainable Protein Startup Competition will identify and support early-stage startups in sustainable protein solutions. Three winning startups will be selected through a competitive pitching process. Each winner will receive funding and in-kind support of up to S$175,000, while gaining access to Singapore's comprehensive ecosystem of resources. The competition will be launched on 1 August 2025 at the first International Summit on Sustainable Protein (ISSP). There are plans to hold the competition annually, supporting up to 15 early-stage global and local startups over a five-year period.

Professor Zhou Weibiao, Acting Director of the Bezos Centre for Sustainable Protein at NUS and Head of the NUS Department of Food Science and Technology, said, “One of our goals at the Bezos Centre for Sustainable Protein at NUS is to bridge world-class research in sustainable protein with real-world impact. This startup competition and the ISSP are natural extensions of that mission — paving the way for promising technologies to move from lab to market, and from local to global impact. Our partnership with EnterpriseSG will further support our aspiration by accelerating innovation and strengthening the sustainable protein ecosystem in Singapore and beyond.”  

Ms Jeannie Lim, Assistant Managing Director for Services and Growth Enterprises at EnterpriseSG, said, “Our partnership with the Bezos Centre @NUS comes at a crucial time to support promising foodtech startups in securing funding to develop their technologies and bring innovative solutions to market. This initiative reflects our commitment to advance promising sustainable food innovations while catalysing greater private sector participation and investments in Singapore's foodtech sector. We welcome foodtech startups to leverage Singapore’s resources, robust innovation ecosystem and progressive regulatory framework to accelerate their R&D efforts and scale their impact across the region.”

Bezos Centre @NUS x EnterpriseSG’s Sustainable Protein Startup Competition

The competition aims to support early-stage startups developing novel sustainable protein technologies that can address critical challenges in food security and climate resilience. This includes innovative solutions in fermentation, cultivated meat, plant-based proteins, and food safety & toxicology. Startups will be evaluated on their technology innovation and readiness, market potential and validation, business model scalability, and team capabilities.

From the entries, eight promising startups will advance to the final pitching round, which will be held at the Singapore International Agri-Food Week in November 2025. Three winning startups will be selected, and each winner will receive seed funding and in-kind support of up to S$175,000. Bezos Centre @NUS will provide a S$75,000 cash grant and EnterpriseSG will match this with a S$75,000 Startup SG Founder (Competition) grant. Winners can utilise the seed funding for R&D, prototype development, and pilot trials of sustainable protein products. Besides the funding, winners will receive up to S$25,000 worth of in-kind support from Bezos Centre @NUS. This includes mentorship, laboratory facilities, office space, startup incubation resources, and the opportunity to pitch at an annual startup event.

Interested startups can submit their applications from 1 August to 8 September 2025. More information on the Sustainable Protein Startup Competition is available at https://go.gov.sg/spsc-2025.

Support for growth-stage sustainable protein startups

As part of its US$3 million funding to accelerate sustainable protein solutions, Bezos Centre @NUS has earmarked up to US$1.5 million to help growth-stage sustainable protein startups and high-potential projects scale up production, strengthen market presence, and drive international growth.

Bezos Centre @NUS plans to award grants of up to US$500,000 each to three promising startups or high-potential projects. EnterpriseSG will be partnering with Bezos Centre @NUS to further enhance this support through the Startup SG Tech Grant matching. More details will be announced in November this year.

ISSP: A collaborative platform to drive innovation and knowledge sharing in sustainable protein

The ISSP is co-organised by the Bezos Centre for Sustainable Protein at NUS, the Department of Food Science and Technology at the NUS Faculty of Science, the Singapore Food Agency, and Enterprise Singapore, with the Agency for Science, Technology and Research as a strategic partner. The two-day event will be held from 31 July to 1 August 2025 at Pan Pacific Hotel Singapore.

Talks and discussions held at ISSP addressed one of the most urgent global challenges: how to sustainably feed a growing and aging global population, projected to exceed 9 billion by 2050, amidst intensifying climate pressures and resource constraints. To cater to the rising demand for safe and high-quality protein, it is imperative to develop innovative solutions that guarantee food security and optimal nutrition while reducing environmental impacts. ISSP provides a unique platform for experts, stakeholders, and innovators from diverse fields to converge and collaborate on advancing sustainable proteins.

Core themes at ISSP include:

• Scientific Knowledge Exchange: Showcasing cutting-edge research on sustainable proteins by global and regional thought leaders.

• Technology Translation and Growth: Accelerating promising research and commercialisation efforts through competitions, industry feedback, and curated investor engagement.

• Ecosystem Building: Facilitating collaborations among academia, startups, government bodies and industry.

• Showcasing Capabilities: Highlighting ready-to-license innovations and technical expertise from research institutes, universities, and enterprise partners.

ISSP convened more than 200 participants from top research institutes, alternative protein hubs, industry, and policy circles. Notable speakers at the summit included Professor Francesco Branca, Invited Professor from the Institute of Global Health at the University of Geneva, and Dr Lynnette Marie Neufeld, Director of the Food and Nutrition Division at the Food and Agriculture Organisation of the United Nations (FAO).

More information on ISSP is available at: https://issp2025.sg/index.php

Harvard appoints Rabbi Getzel Davis as inaugural director of interfaith engagement

Campus & Community

Harvard appoints Rabbi Getzel Davis as inaugural director of interfaith engagement

Rabbi Getzel Davis.

Niles Singer/Harvard Staff Photographer

Jacob Sweet

Harvard Staff Writer

7 min read

Presidential initiative will promote religious literacy and dialogue across faith and non-faith traditions

Among Harvard’s chaplaincy, Rabbi Getzel Davis has long been known as a bridge builder. From his internship at Harvard Hillel in 2012 to his service as a member of the executive committee of Harvard chaplains, Davis has created lasting relationships across religious, spiritual, and ethical organizations on campus.

Davis will now join the University staff as inaugural director of interfaith engagement, where he will lead programs to foster respect for diverse identities, build relationships among communities, and encourage cooperation for the common good. He sees the post as a natural continuation of his tenure at Harvard.

“I spent 12 years as a Harvard chaplain, and I learned a lot about all these other communities,” Davis said. “Not only did I build deeper relationships with them and run programming together, but I learned a lot about what they were struggling with and was often surprised that, in fact, we had a lot in common.”

In the new role, part of a presidential initiative on interfaith engagement, Davis will oversee projects that promote religious literacy and meaningful dialogue across diverse faith and non-faith traditions, and collaborate with University offices to advocate for the needs of religious and spiritual communities.

“Creating a community in which every person at Harvard can thrive means expanding opportunities for individuals to know, understand, and appreciate one another,” said President Alan M. Garber. “Rabbi Davis is a good listener and a great collaborator. His capacities for curiosity and compassion will shape our efforts to ensure that Harvard is a place where people can be themselves, express their views, and pursue their dreams both individually and collectively.”

Imam Khalil Abdur-Rashid and Rabbi Getzel Davis walking in Harvard Yard.

Imam Khalil Abdur-Rashid (left), Harvard’s Muslim chaplain, called the appointment of Davis to his new role “a win for Harvard, a win for the chaplains, and a win for our students.”

File photo by Veasey Conway/Harvard Staff Photographer

Davis brings with him deep relationships with many of Harvard’s chaplains, including Imam Khalil Abdur-Rashid, Harvard’s Muslim chaplain, who expressed excitement about Davis’ appointment and the new role. “To have someone in the Office of the President that is devoted to fostering interfaith programming is innovative, strategic, and forward-looking,” he said. “I think his presence as director of interfaith engagement is a win for Harvard, a win for the chaplains, and a win for our students.”

The work has already begun. In the coming semester, Davis will launch the First-Year Religious, Ethical, and Spiritual Life Fellowship, a paid 10-session program that helps students develop the skills to navigate complex differences and combat religious prejudice, antisemitism, and Islamophobia. At the end of the program, students will have the opportunity to apply for grants to foster their own interfaith initiatives on campus.

Davis is also collaborating with the office of the College dean of students to provide programming for pre-orientation and orientation to help promote pluralism and mutual understanding.

These new projects will run alongside existing programming, including Interfaith PhotoVoice — an exhibit of photos and stories that reflect student perspectives on religion, ethics, and spirituality — and Pluralism Passports, a series of interfaith events and programs that help Harvard community members learn about religious, ethical, and spiritual communities outside their own. Additional programs, administered by Davis and multifaith engagement fellow Abby McElroy, will begin throughout the academic year.

Other chaplains joined Abdur-Rashid in praising Davis as the right leader for the role.

“Getzel is a leader of deep humanity who has already spent years working hard to build closer, more mutually respectful relationships at Harvard, between religious groups that would undoubtedly have been more at odds with one another if not for his presence,” said Harvard Humanist Chaplain Greg Epstein. “In my particular case, I can say he has also been a wonderful champion of friendship and understanding between religious and nonreligious communities.”

Tammy McLeod, president of the Harvard Chaplains and a staff member of the interdenominational Christian organization Cru, also spoke to Davis’s ability to lead across difference. “Within the Harvard Chaplains, he has been a dedicated advocate for cultivating genuine relationships across diverse belief systems,” said McLeod. “Warm, personable, and deeply committed to life’s enduring questions, Getzel brings a unique presence to Harvard’s spiritual and ethical landscape. Students will find great value in engaging with him. His new position is not only timely — it is vital.”

Rabbi Jason Rubenstein, executive director of Harvard Hillel, echoed that sentiment.

“Of the many people I have worked with and observed in higher education, none is a better exemplar of assiduously cultivating relationships with colleagues across difference. … I cannot imagine a better fit, or more urgent work, than his new role of stitching together the different strands of Harvard’s communal tapestry into a more unified, humane, and interconnected whole.”

Davis lives with his wife, Leah Rosenberg, a physician at Massachusetts General Hospital and an assistant professor of medicine at Harvard Medical School, and three children in Cambridge. At Brandeis University, he majored in Near Eastern and Judaic studies, with a minor in comparative religion, before attending Hebrew College, a pluralistic rabbinical school in Newton. He first joined Harvard Hillel as an intern, advising the Reform and Conservative minyans on campus. In 2015, he became Harvard Hillel’s director of graduate programming and chair of University Programs for Harvard Chaplains.

In the latter role, he aimed to strengthen relationships among more than 40 chaplains from more than 30 religious and ethical traditions. Davis recalls meeting in a different chaplaincy every month, giving different groups opportunities to share their triumphs and struggles.

Aside from formal programming, Davis and other chaplains hosted meals open to students to discuss essential questions of faith, meaning, and collaboration on campus. He also changed the format of chaplain meetings to build time for one-on-one conversations and in-person gatherings.

“I find a lot of the way I encounter the sacred is to be in relationship with other people,” said Davis, who became campus rabbi in 2023. “And some of that has been by developing deep and trusting relationships with the other chaplains.”

The deep bonds with other religious leaders, including Abdur-Rashid, led to joint events between Harvard Hillel and other groups like the Harvard Islamic Society. Davis cited the “Sukkat Salaam” dinner as one of many successful collaborations — an event that celebrated the start of the Jewish holiday Sukkot and the close of Ramadan, the Islamic month of fasting.

The relationship between Davis and Abdur-Rashid proved valuable following the events of Oct. 7, 2023, as Jewish and Muslim students navigated complex emotional and community responses to the attack on Israel and the Gaza war.

In December 2023, the two held their first of three vigils, praying together for peace for all those affected by the conflict. “They felt very important, symbolically, to be done on campus,” Davis said. “It felt like a very big deal.”

This experience of bringing communities together during a particularly challenging time reinforced Davis’ belief in a more structured approach to interfaith work on campus. After leaving Hillel in March 2025 to regroup and spend time with his family, Davis continued thinking about the connections he had formed with other chaplains, imagining a new role that would allow him to establish programming for an even wider and more diverse community.

“That time of reflection gave me the clarity to see that the bridge-building work we did at Hillel was precisely what the entire campus needed,” Davis said. “I used that period to meet with chaplains, administrators, and students to develop a concrete vision for how Harvard could foster true pluralism. This collective vision is what the University has now entrusted me to advance.”

After more than a decade at the University, Davis is thrilled to be stepping into the inaugural role and an initiative that he expects to grow in years to come. “This new role feels like the culmination of my entire career here,” he said. “I am honored and energized to answer this call to serve the whole Harvard community.”

Mapping cells in time and space: New tool reveals a detailed history of tumor growth

All life is connected in a vast family tree. Every organism exists in relationship to its ancestors, descendants, and cousins, and the path between any two individuals can be traced. The same is true of cells within organisms — each of the trillions of cells in the human body is produced through successive divisions from a fertilized egg, and all of them can be related to one another through a cellular family tree. In simpler organisms, such as the worm C. elegans, this cellular family tree has been fully mapped, but the cellular family tree of a human is many times larger and more complex.

In the past, MIT professor and Whitehead Institute for Biomedical Research member Jonathan Weissman and other researchers developed lineage tracing methods to track and reconstruct the family trees of cell divisions in model organisms in order to understand more about the relationships between cells and how they assemble into tissues, organs, and — in some cases — tumors. These methods could help to answer many questions about how organisms develop and diseases like cancer are initiated and progress.

Now, Weissman and colleagues have developed an advanced lineage tracing tool that not only captures an accurate family tree of cell divisions, but also combines that with spatial information: identifying where each cell ends up within a tissue. The researchers used their tool, PEtracer, to observe the growth of metastatic tumors in mice. Combining lineage tracing and spatial data provided the researchers with a detailed view of how elements intrinsic to the cancer cells and from their environments influenced tumor growth, as Weissman and postdocs in his lab Luke Koblan, Kathryn Yost, and Pu Zheng, and graduate student William Colgan share in a paper published in the journal Science on July 24.

“Developing this tool required combining diverse skill sets through the sort of ambitious interdisciplinary collaboration that’s only possible at a place like Whitehead Institute,” says Weissman, who is also a Howard Hughes Medical Institute investigator. “Luke came in with an expertise in genetic engineering, Pu in imaging, Katie in cancer biology, and William in computation, but the real key to their success was their ability to work together to build PEtracer.”

“Understanding how cells move in time and space is an important way to look at biology, and here we were able to see both of those things in high resolution. The idea is that by understanding both a cell’s past and where it ends up, you can see how different factors throughout its life influenced its behaviors. In this study, we use these approaches to look at tumor growth, though in principle we can now begin to apply these tools to study other biology of interest, like embryonic development,” Koblan says.

Designing a tool to track cells in space and time

PEtracer tracks cells’ lineages by repeatedly adding short, predetermined codes to the DNA of cells over time. Each piece of code, called a lineage tracing mark, is made up of five bases, the building blocks of DNA. These marks are inserted using a gene editing technology called prime editing, which directly rewrites stretches of DNA with minimal undesired byproducts. Over time, each cell acquires more lineage tracing marks, while also maintaining the marks of its ancestors. The researchers can then compare cells’ combinations of marks to figure out relationships and reconstruct the family tree.
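
The logic can be sketched in a few lines of code. The following is a hypothetical toy simulation, not the PEtracer software itself: cells inherit their ancestors’ five-base marks and sometimes acquire a new one at each division, so the number of marks two cells share reflects how recently they diverged; a real reconstruction would then feed these marks into a tree-building algorithm. The mark rate, cell names, and helper functions are invented for illustration.

```python
# Toy illustration of lineage tracing with accumulating five-base marks.
# Hypothetical parameters; not the researchers' code or data.
import random

BASES = "ACGT"

def random_mark(length=5):
    """One fixed-length insertion, standing in for a prime-editing mark."""
    return "".join(random.choice(BASES) for _ in range(length))

def simulate(generations=6, mark_rate=0.7):
    """Grow a binary cell-division tree; return the final generation of cells."""
    cells = [("cell", [])]  # (name, inherited marks)
    for _ in range(generations):
        next_gen = []
        for name, marks in cells:
            for d in (0, 1):                      # two daughters per division
                child = list(marks)               # inherit ancestral marks
                if random.random() < mark_rate:   # sometimes gain a new mark
                    child.append(random_mark())
                next_gen.append((f"{name}.{d}", child))
        cells = next_gen
    return cells

def shared_marks(a, b):
    """Relatedness score: number of marks present in both cells."""
    return len(set(a) & set(b))

random.seed(1)
leaves = simulate()
(s1, m1), (s2, m2), (_, far) = leaves[0], leaves[1], leaves[-1]
print(s1, "vs", s2, "->", shared_marks(m1, m2), "shared marks (sisters)")
print(s1, "vs last cell ->", shared_marks(m1, far), "shared marks (distant cousins)")
```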

“We used computational modeling to design the tool from first principles, to make sure that it was highly accurate, and compatible with imaging technology. We ran many simulations to land on the optimal parameters for a new lineage tracing tool, and then engineered our system to fit those parameters,” Colgan says.

When the tissue — in this case, a tumor growing in the lung of a mouse — had sufficiently grown, the researchers collected these tissues and used advanced imaging approaches to look at each cell’s lineage relationship to other cells via the lineage tracing marks, along with its spatial position within the imaged tissue and its identity (as determined by the levels of different RNAs expressed in each cell). PEtracer is compatible with both imaging approaches and sequencing methods that capture genetic information from single cells.

“Making it possible to collect and analyze all of this data from the imaging was a large challenge,” Zheng says. “What’s particularly exciting to me is not just that we were able to collect terabytes of data, but that we designed the project to collect data that we knew we could use to answer important questions and drive biological discovery.”

Reconstructing the history of a tumor

Combining the lineage tracing, gene expression, and spatial data let the researchers understand how the tumor grew. They could tell how closely related neighboring cells are and compare their traits. Using this approach, the researchers found that the tumors they were analyzing were made up of four distinct modules, or neighborhoods, of cells.

The tumor cells closest to the lung, the most nutrient-dense region, were the most fit, meaning their lineage history indicated the highest rate of cell division over time. Fitness in cancer cells tends to correlate with how aggressively tumors will grow.

The cells at the “leading edge” of the tumor, the far side from the lung, were more diverse and not as fit. Below the leading edge was a low-oxygen neighborhood of cells that might once have been leading edge cells, now trapped in a less-desirable spot. Between these cells and the lung-adjacent cells was the tumor core, a region with both living and dead cells, as well as cellular debris.

The researchers found that cancer cells across the family tree were equally likely to end up in most of the regions, with the exception of the lung-adjacent region, where a few branches of the family tree dominated. This suggests that the cancer cells’ differing traits were heavily influenced by their environments, or the conditions in their local neighborhoods, rather than their family history. Further evidence of this point was that expression of certain fitness-related genes, such as Fgf1/Fgfbp1, correlated with a cell’s location, rather than its ancestry. However, lung-adjacent cells also had inherited traits that gave them an edge, including expression of the fitness-related gene Cldn4 — showing that family history influenced outcomes as well.

These findings demonstrate how cancer growth is influenced both by factors intrinsic to certain lineages of cancer cells and by environmental factors that shape the behavior of cancer cells exposed to them.

“By looking at so many dimensions of the tumor in concert, we could gain insights that would not have been possible with a more limited view,” Yost says. “Being able to characterize different populations of cells within a tumor will enable researchers to develop therapies that target the most aggressive populations more effectively.”

“Now that we’ve done the hard work of designing the tool, we’re excited to apply it to look at all sorts of questions in health and disease, in embryonic development, and across other model species, with an eye toward understanding important problems in human health,” Koblan says. “The data we collect will also be useful for training AI models of cellular behavior. We’re excited to share this technology with other researchers and see what we all can discover.”

© Image courtesy of Science Online.

Each tumor, represented by a bar, is made up of distinct groups of cell populations, represented by color.

Creeping crystals: Scientists observe “salt creep” at the single-crystal scale

Salt creeping, a phenomenon that occurs in both natural and industrial processes, describes the collection and migration of salt crystals from evaporating solutions onto surfaces. Once they start collecting, the crystals climb, spreading away from the solution. This creeping behavior, according to researchers, can cause damage or be harnessed for good, depending on the context. New research published June 30 in the journal Langmuir is the first to show salt creeping at a single-crystal scale and beneath a liquid’s meniscus.

“The work not only explains how salt creeping begins, but why it begins and when it does,” says Joseph Phelim Mooney, a postdoc in the MIT Device Research Laboratory and one of the authors of the new study. “We hope this level of insight helps others, whether they’re tackling water scarcity, preserving ancient murals, or designing longer-lasting infrastructure.”

The work is the first to directly visualize how salt crystals grow and interact with surfaces underneath a liquid meniscus, something that’s been theorized for decades but never actually imaged or confirmed at this level, and it offers fundamental insights that could impact a wide range of fields — from mineral extraction and desalination to anti-fouling coatings, membrane design for separation science, and even art conservation, where salt damage is a major threat to heritage materials.

In civil engineering applications, for example, the research can help explain why and when salt crystals start growing across surfaces like concrete, stone, or building materials. “These crystals can exert pressure and cause cracking or flaking, reducing the long-term durability of structures,” says Mooney. “By pinpointing the moment when salt begins to creep, engineers can better design protective coatings or drainage systems to prevent this form of degradation.”

For a field like art conservation, where salt can be devastating to murals, frescoes, and ancient artifacts, often forming beneath the surface before visible damage appears, the work can help identify the exact conditions that cause salt to start moving and spreading, allowing conservators to act earlier and more precisely to protect heritage objects.

The work began during Mooney’s Marie Curie Fellowship at MIT. “I was focused on improving desalination systems and quickly ran into [salt buildup as] a major roadblock,” he says. “[Salt] was everywhere, coating surfaces, clogging flow paths, and undermining the efficiency of our designs. I realized we didn’t fully understand how or why salt starts creeping across surfaces in the first place.”

That experience led Mooney to team up with colleagues to dig into the fundamentals of salt crystallization at the air–liquid–solid interface. “We wanted to zoom in, to really see the moment salt begins to move, so we turned to in situ X-ray microscopy,” he says. “What we found gave us a whole new way to think about surface fouling, material degradation, and controlled crystallization.”

The new research may, in fact, allow better control of the crystallization processes required to remove salt from water in zero-liquid discharge systems. It can also be used to explain how and when scaling happens on equipment surfaces, and may support emerging climate technologies that depend on smart control of evaporation and crystallization.

The work also supports mineral and salt extraction applications, where salt creeping can be both a bottleneck and an opportunity. In these applications, Mooney says, “by understanding the precise physics of salt formation at surfaces, operators can optimize crystal growth, improving recovery rates and reducing material losses.”

Mooney’s co-authors on the paper include fellow MIT Device Lab researchers Omer Refet Caylan, Bachir El Fil (now an associate professor at Georgia Tech), and Lenan Zhang (now an associate professor at Cornell University); Jeff Punch and Vanessa Egan of the University of Limerick; and Jintong Gao of Cornell.

The research was conducted using in situ X-ray microscopy. Mooney says the team’s big realization moment occurred when they were able to observe a single salt crystal pinning itself to the surface, which kicked off a cascading chain reaction of growth.

“People had speculated about this, but we captured it on X-ray for the first time. It felt like watching the microscopic moment where everything tips, the ignition points of a self-propagating process,” says Mooney. “Even more surprising was what followed: The salt crystal didn’t just grow passively to fill the available space. It pierced through the liquid-air interface and reshaped the meniscus itself, setting up the perfect conditions for the next crystal. That subtle, recursive mechanism had never been visually documented before — and seeing it play out in real time completely changed how we thought about salt crystallization.”

The paper, “In Situ X-ray Microscopy Unraveling the Onset of Salt Creeping at a Single-Crystal Level,” is available now in the journal Langmuir. Research was conducted in MIT.nano. 

© Image courtesy of the researchers.

New research from the MIT Device Lab is the first to show salt creeping at a single-crystal scale and beneath the liquid’s meniscus.

Creeping crystals: Scientists observe “salt creep” at the single-crystal scale

Salt creeping, a phenomenon that occurs in both natural and industrial processes, describes the collection and migration of salt crystals from evaporating solutions onto surfaces. Once they start collecting, the crystals climb, spreading away from the solution. This creeping behavior, according to researchers, can cause damage or be harnessed for good, depending on the context. New research published June 30 in the journal Langmuir is the first to show salt creeping at a single-crystal scale and beneath a liquid’s meniscus.

“The work not only explains how salt creeping begins, but why it begins and when it does,” says Joseph Phelim Mooney, a postdoc in the MIT Device Research Laboratory and one of the authors of the new study. “We hope this level of insight helps others, whether they’re tackling water scarcity, preserving ancient murals, or designing longer-lasting infrastructure.”

The work is the first to directly visualize how salt crystals grow and interact with surfaces underneath a liquid meniscus, something that’s been theorized for decades but never actually imaged or confirmed at this level, and it offers fundamental insights that could impact a wide range of fields — from mineral extraction and desalination to anti-fouling coatings, membrane design for separation science, and even art conservation, where salt damage is a major threat to heritage materials.

In civil engineering applications, for example, the research can help explain why and when salt crystals start growing across surfaces like concrete, stone, or building materials. “These crystals can exert pressure and cause cracking or flaking, reducing the long-term durability of structures,” says Mooney. “By pinpointing the moment when salt begins to creep, engineers can better design protective coatings or drainage systems to prevent this form of degradation.”

For a field like art conservation, where salt can be devastating to murals, frescoes, and ancient artifacts, often forming beneath the surface before visible damage appears, the work can help identify the exact conditions that cause salt to start moving and spreading, allowing conservators to act earlier and more precisely to protect heritage objects.

The work began during Mooney’s Marie Curie Fellowship at MIT. “I was focused on improving desalination systems and quickly ran into [salt buildup as] a major roadblock,” he says. “[Salt] was everywhere, coating surfaces, clogging flow paths, and undermining the efficiency of our designs. I realized we didn’t fully understand how or why salt starts creeping across surfaces in the first place.”

That experience led Mooney to team up with colleagues to dig into the fundamentals of salt crystallization at the air–liquid–solid interface. “We wanted to zoom in, to really see the moment salt begins to move, so we turned to in situ X-ray microscopy,” he says. “What we found gave us a whole new way to think about surface fouling, material degradation, and controlled crystallization.”

The new research may, in fact, allow better control of a crystallization processes required to remove salt from water in zero-liquid discharge systems. It can also be used to explain how and when scaling happens on equipment surfaces, and may support emerging climate technologies that depend on smart control of evaporation and crystallization.

The work also supports mineral and salt extraction applications, where salt creeping can be both a bottleneck and an opportunity. In these applications, Mooney says, “by understanding the precise physics of salt formation at surfaces, operators can optimize crystal growth, improving recovery rates and reducing material losses.”

Mooney’s co-authors on the paper include fellow MIT Device Lab researchers Omer Refet Caylan, Bachir El Fil (now an associate professor at Georgia Tech), and Lenan Zhang (now an associate professor at Cornell University); Jeff Punch and Vanessa Egan of the University of Limerick; and Jintong Gao of Cornell.

The research was conducted using in situ X-ray microscopy. Mooney says the team’s breakthrough moment came when they observed a single salt crystal pinning itself to the surface, which kicked off a cascading chain reaction of growth.

“People had speculated about this, but we captured it on X-ray for the first time. It felt like watching the microscopic moment where everything tips, the ignition points of a self-propagating process,” says Mooney. “Even more surprising was what followed: The salt crystal didn’t just grow passively to fill the available space. It pierced through the liquid-air interface and reshaped the meniscus itself, setting up the perfect conditions for the next crystal. That subtle, recursive mechanism had never been visually documented before — and seeing it play out in real time completely changed how we thought about salt crystallization.”

The paper, “In Situ X-ray Microscopy Unraveling the Onset of Salt Creeping at a Single-Crystal Level,” is available now in the journal Langmuir. The research was conducted at MIT.nano.

© Image courtesy of the researchers.

New research from the MIT Device Lab is the first to show salt creeping at a single-crystal scale and beneath the liquid’s meniscus.

From tragedy to ‘Ecstasy’

Ivy Pochoda

Ivy Pochoda.

Photo by Darran Tiernan

Arts & Culture

From tragedy to ‘Ecstasy’

Ivy Pochoda’s feminist retelling of ‘The Bacchae’ examines freedom from inhibition with Electronic Dance Music beat

Anna Lamb

Harvard Staff Writer

5 min read

King Pentheus of Thebes and his mother, Agave, become the target of the god Dionysus’ wrath for rejecting his sybaritic cult in the ancient Greek tragedy “The Bacchae.”

book cover of Ecstasy by Ivy Pochoda

In “Ecstasy,” Ivy Pochoda’s new feminist retelling, Dionysus is an international DJ with a cult following in the Electronic Dance Music, or EDM, and rave scene. Pentheus and Agave become Drew and his mother, Lena — heir and widow to a deceased hotel magnate opening a new luxury resort on a Greek island.

It’s a bloody story in both the old telling and the new, rife with decadence and depravity — and one with timeless appeal, judging from the multitude of stagings and adaptations over the centuries.

For Pochoda, the new project additionally marks a return to an early love — and an earlier self.

“I did Latin and Greek in middle and high school, and I was really good at it,” said Pochoda, a 1998 graduate in classics and literature. “And one of the reasons I wanted to go to Harvard was because of their classics department.”

Raised in Brooklyn, Pochoda attended high school at St. Ann’s — a private school with no grades, no set curriculum, and a philosophy of being “systematically asystematic.” One year, her teachers led the class through a translation of Ovid’s “Metamorphoses.” During another, they spent the entire year translating Euripides.

“I spent my senior year in high school translating ‘The Bacchae,’” Pochoda said. “We did it start to finish, and it was really a cool experience for a 17-year-old to get that immersed in a text. And it was never really far from my brain.”

But in College, Pochoda said, it was hard to immerse herself in ancient stories in the same way.

“I found out in College that being interested in classics and being interested in mythology are not the same thing,” she said. “When I was in high school, it sort of was — we were able to overlap.”

Pochoda said it seemed to her that having a concentration in classics meant translating — like, all the time.

She wanted to spend more time discussing meaning and themes, the part of ancient storytelling that brought her joy. That’s why, halfway through her undergraduate study, Pochoda decided she would switch concentrations.

“And there was this concentration called classics and secondary fields, which was not meant to be combined with English. But I did it, and I combined it through the study of dramatic literature, which brought me back to ‘The Bacchae’ and plays that I love.”

“It took me back to where I started from, which is being academic, but also creative, and applying that academia to performance and to things that are just a little off the beaten path.”

To fulfill the novel requirements set forth by combining literature and classics, Pochoda began taking classes at the American Repertory Theater, alongside creative writing courses. She reminisces fondly about her classes with Professor Emeritus Robert Brustein and associate Robert Scanlan.

“It took me back to where I started from, which is being academic, but also creative, and applying that academia to performance and to things that are just a little off the beaten path,” she said. “Being an undergraduate and taking classes with students in the arts and working with the art professors and actually thinking about why I was studying Greek and why I was studying English literature through a dramatic focus, was a really interesting tunnel.”

The setting of “Ecstasy” is far from the ivy-covered buildings of Cambridge, or even the metropolis of Los Angeles, where she now lives with her 10-year-old daughter, but Pochoda said there is real-life inspiration at play.

“Ecstasy” is set largely on the island of Naxos — a destination she visited in 2018 while working on the “Epoca” series with Kobe Bryant.

In addition to her real-life island retreat, Pochoda has also dabbled in the world of EDM. In her previous life as captain of the women’s squash team at Harvard, followed by nine years playing professionally in Europe, Pochoda went out her fair share.

“I’m not some super hardcore EDM person, but I do know about it. I mean, I’ve been to some raves and parties, which was a problem for me academically,” she said, laughing. “I will talk about it openly,” she added.

As for the decision to transpose this culture onto that of the ancient Greek god known for his love of wine and sex and revelry, Pochoda said that was easy.

“When I was thinking about what’s going on in that play, those women are raving for all intents and purposes.”

“When I was thinking about what’s going on in that play, those women are raving for all intents and purposes,” she said. “In the early EDM, early trance parties, early underground music, there was a lot of suspicion of what was going on and a lot of worry that the music was making you crazy and the drugs were making you crazy. So in the book, I try to use the idea of a beat, or beats, and the build-ups of EDM.”

But to be clear, Pochoda said, this is not quite a cautionary tale.

“The main characters, they want to go to the beach and party their faces off and reconnect with their youthful exuberance and the permissiveness of youth — the permissiveness of women being allowed to do what they want to do without men telling them what they want to do, what they can’t do,” she said. “But there is a dark side to that.”

More than a simulation: SMUN 2025 prepares future leaders for global realities

More than 400 students from over 20 pre-tertiary and tertiary institutions across Singapore and the Asia-Pacific region convened to debate critical global concerns at the 22nd Singapore Model United Nations (SMUN) conference that was organised by the NUS Political Science Society (NUS PSSOC) in June at NUS University Town (UTown).

Themed “Building a New Era of Diplomacy”, SMUN 2025 provided an incisive lens through which young participants navigated the intricate complexities of global geopolitics by role-playing as UN delegates and members of international organisations such as the International Monetary Fund and the World Trade Organisation to discuss current issues such as digital currencies and tariffs.

A key highlight of the event was the thought-provoking keynote by distinguished diplomat and academic Mr Kishore Mahbubani, Distinguished Fellow at the NUS Asia Research Institute. In his address, Mr Mahbubani explored the shifting paradigms of global power and offered compelling insights into the current international order. He highlighted the remarkable success of ASEAN as a testament to effective regional cooperation and presented it as a model for navigating complex global challenges.

Mr Mahbubani also critically examined the structure of the UN, particularly the influence of the permanent members’ veto powers. While acknowledging that the UN’s effectiveness is often shaped by the interests of the major powers, he emphasised that it remains in the interest of small states to strengthen the multilateral system by “build[ing] upon a rules-based order and cooperat[ing] in issues of global importance such as climate change and denuclearisation.” His remarks struck a chord with the young delegates, reinforcing their understanding of the importance of multilateralism and the pursuit of shared global interests to shape a more equitable world. 

Diverse agendas and dynamic debates: Inside the councils of SMUN 2025

This year’s conference featured delegates from 11 councils tackling a wide array of pertinent topics. In the United Nations Security Council, discussions centred on reforming peacekeeping operations – with delegates debating their effectiveness, future direction and the complexities of maintaining peace in conflicts such as the Syrian Civil War. The United Nations Economic Commission for Europe (UNECE) engaged in intricate negotiations on transboundary water treaties across the Pan-European region, a critical issue with far-reaching environmental and geopolitical implications. Meanwhile, the House of Commons (HOC) simulation offered a unique platform to examine the ongoing complexities and multifaceted consequences of Brexit, giving participants the opportunity to explore national political dynamics within an international context. 

Through extensive research, preparation of detailed position papers, and the development of innovative policy proposals, delegates engaged in intense debates, striving to pass resolutions through collaborative effort and majority vote.

As with previous iterations, SMUN 2025 was meticulously designed to spur active participation and meaningful dialogue. Participants who demonstrated significant contributions throughout the conference were duly recognised with prestigious awards such as Best Delegate, Outstanding Delegate, Honourable Mentions, and Best Position Paper, celebrating their dedication and diplomatic acumen.

Mr Arav Taneja from Temasek Secondary School received an honourable mention award for his outstanding contributions in the House of Representatives of Japan (HRJ). Recounting his experience, he shared, “When I first walked into the House of Representatives Japan SMUN conference room, I felt like a small fish in a big lake. However, as we progressed through council sessions, I found my voice. I engaged in constructive debate with other delegates and worked on resolutions like it was second nature. I gained invaluable knowledge not only about Japan but also about global dynamics. However, I think the biggest takeaway was the connections and friendships I made along the way.”

PSSOC President Ms Irdina Duran, a Year 2 student from the NUS Faculty of Arts and Social Sciences, expressed her thanks to all involved for the success of SMUN 2025. “We are immensely grateful to Mr Mahbubani for his profound insights and to all the dedicated participants who made this year’s conference an undeniable success.”

SMUN 2025 Secretary-General Mr Aditya Garladinne, a first-year student from the NUS Faculty of Law, noted that this year’s event proved to be a vibrant crucible of intellect and diplomacy. Far more than a mere simulation, it was a testament to the boundless potential of our youth, he said.

“Witnessing these bright minds grappling with the world's most pressing challenges is not just inspiring; it is a profound reassurance that the future of global cooperation rests in exceptionally capable hands. This event truly exemplifies how experiential learning can cultivate not only future leaders but also compassionate and globally aware citizens." 

By the NUS Political Science Society at the NUS Faculty of Arts and Social Sciences

Harmony in diversity: International friendships flourish at YST

International Friendship Day, observed annually on 30 July, celebrates friendships between people of different countries and cultures to cultivate a shared sense of solidarity and promote dialogue to bridge differences.

As a global university, NUS nurtures students into global citizens through overseas opportunities, collaborations with foreign universities, and immersion in a cosmopolitan community that has 123 nationalities represented among staff and students. This diversity is especially prominent at the Yong Siew Toh Conservatory of Music (YST), which boasts one of the most culturally diverse student bodies at NUS. There are more than 20 nationalities in its undergraduate student population of about 240, creating a vibrant environment for friendships to blossom across different nationalities and cultures.

In celebration of International Friendship Day, five Voice majors hailing from various parts of Southeast Asia, South Korea, and India share how their multinational group of friends has enriched and shaped their university experience both inside and outside the classroom.

Embracing curiosity and kindness

Like many university friend groups, Val Chong, Samiksha Argal, Park Minjun, and Leanne Reese first met and bonded in shared core classes in Year 1. Representing Singapore, India, South Korea, and the Philippines respectively, they became acquainted through casual chats between lessons and grabbing meals together after classes and performances.

Notably, the friend group also comprises their major’s entire Class of 2026, as YST only takes in an average of three to five Voice students per year. Due to the small size of the cohort, Indonesian Jason Suryaatmaja (Class of 2025) would often fill in for missing parts in their group performances and soon joined their circle of friends, adding another nationality to the already diverse squad.

Making friends with people of other nationalities feels natural in YST, said Minjun: “We’re lucky because our faculty is one of the smallest at NUS with people from so many different backgrounds. It makes it easier to get close to people from other countries.”

Navigating cultural differences is a major part of the YST experience, especially when classes are led by an international faculty and exchanging feedback is an important part of the learning experience. Cultural differences can often impact how tone and intention are conveyed and perceived, and nuance can sometimes be lost when a thought is translated between languages.

The five friends overcame this challenge by embracing an attitude of curiosity towards cultures other than their own, as exemplified in a quote that Leanne learnt from a professor while on exchange in Scotland: “Don’t be interesting; be interested.” This mindset led them to gain a better understanding of each other’s colloquialisms and mannerisms, which differ across cultures and languages. Over time, they have developed a shared trust and now count the ability to exchange candid feedback among themselves as one of the most valuable benefits of their friend group.

“Val is an amazing performer, so when we are rehearsing for productions, I will ask her, ‘Do you think this works?’ I ask Minjun for technical advice and we explore singing techniques together. It has really helped me on the career side,” Jason said, prompting some good-natured ribbing from his friends that they are only useful to him as career support.

The supportive environment they have nurtured in their group also extends to their interactions with students from other cohorts, such as during weekly Voice studio classes. Val shared that they make it a point to chat with their juniors during these shared classes and cheer them on even when they make mistakes.

Said Samiksha: “Through the years, we have created the kind of environment that we want for ourselves and others – a warm and nurturing one where everyone can grow and learn together.”

Growth lies outside the comfort zone

While cultural differences can pose challenges at times, they create positive opportunities for growth and connection too. As a Singaporean, Val takes on the responsibility of helping the others feel at home, introducing them to the Chinese New Year tradition of yusheng and taking them café-hopping to explore the country.

“As a local, it is easy to stick to our own cliques, but it is also important to make our fellow international students feel welcome and ‘at home’. We can invite them out for a meal, or even a simple greeting could make their day,” she said.

“YST is so diverse that we’re compelled to integrate, and honestly that pushed me out of my comfort zone as an introvert. Talking to people from different cultures, you’ll learn so much about them and the world in general,” said Samiksha, who recalls learning the entire history of Korea from Minjun, starting from the Stone Age to the present day.

Val noted that her friendships with people whose countries are undergoing crises make her more informed about what is happening overseas and prompt her to empathise.

She added that such interactions can also serve as an important first step in countering racial and cultural stereotypes. “Stereotypes often arise from preconceived notions about others. But when we take the time to talk to others and understand them, we start to realise that we need to correct our assumptions.”

Forming friendships with people of other cultures is crucial for foreign students who do not have family in Singapore, said Leanne, who has heard of students leaving due to loneliness. “It’s really important to have someone who can listen and be there for you,” she said. “There were tough times when school got hectic and stressful, but it’s the friends I’ve made that supported me through my dark times and helped me emerge stronger.”

Ultimately, international friendships are not so different from friendships among people of the same culture, the students said. Social media makes it easier than ever to find common topics like pop culture and memes that can serve as a starting point for conversations, and once a connection is established, deeper shared interests can be discovered to strengthen the friendship.

The group’s favourite stress-busting activity is cycling to West Coast Park, where they would sit by the sea and talk about life, and some of their best memories are from post-concert suppers at the row of eateries along Clementi Road known to NUS students as “Supper Stretch”. “We always eat at the same three places, and I don’t know why we’re not bored of the food yet, but it’s just a lot of fun,” said Samiksha.

Said Minjun: “Most of us are living and studying here alone, but having these friends to journey alongside us makes life a lot more fun and exciting.”

New algorithms enable efficient machine learning with symmetric data

If you rotate an image of a molecular structure, a human can tell the rotated image is still the same molecule, but a machine-learning model might think it is a new data point. In computer science parlance, the molecule is “symmetric,” meaning the fundamental structure of that molecule remains the same if it undergoes certain transformations, like rotation.

If a drug discovery model doesn’t understand symmetry, it could make inaccurate predictions about molecular properties. But despite some empirical successes, it’s been unclear whether there is a computationally efficient method to train a good model that is guaranteed to respect symmetry.
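As a simple illustration of the issue (a generic sketch, not code from the MIT study), the snippet below rotates a toy set of atomic coordinates: a model fed raw coordinates sees a different input after the rotation, while a rotation-invariant feature such as the sorted list of pairwise distances is unchanged.

# Illustrative sketch only; the molecule, rotation, and featurization are made up.
import numpy as np

def rotate_z(coords: np.ndarray, angle: float) -> np.ndarray:
    """Rotate an (N, 3) array of atomic positions about the z-axis."""
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return coords @ R.T

def pairwise_distances(coords: np.ndarray) -> np.ndarray:
    """A rotation-invariant featurization: the sorted pairwise distances."""
    diffs = coords[:, None, :] - coords[None, :, :]
    d = np.linalg.norm(diffs, axis=-1)
    return np.sort(d[np.triu_indices(len(coords), k=1)])

molecule = np.array([[0.0, 0.0, 0.0], [1.1, 0.0, 0.0], [0.0, 1.5, 0.2]])
rotated = rotate_z(molecule, np.pi / 3)
print(np.allclose(molecule, rotated))  # False: raw coordinates differ after rotation
print(np.allclose(pairwise_distances(molecule), pairwise_distances(rotated)))  # True: invariant features agree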

A new study by MIT researchers answers this question, and shows the first method for machine learning with symmetry that is provably efficient in terms of both the amount of computation and data needed.

These results clarify a foundational question, and they could aid researchers in the development of more powerful machine-learning models that are designed to handle symmetry. Such models would be useful in a variety of applications, from discovering new materials to identifying astronomical anomalies to unraveling complex climate patterns.

“These symmetries are important because they are some sort of information that nature is telling us about the data, and we should take it into account in our machine-learning models. We’ve now shown that it is possible to do machine-learning with symmetric data in an efficient way,” says Behrooz Tahmasebi, an MIT graduate student and co-lead author of this study.

He is joined on the paper by co-lead author and MIT graduate student Ashkan Soleymani; Stefanie Jegelka, an associate professor of electrical engineering and computer science (EECS) and a member of the Institute for Data, Systems, and Society (IDSS) and the Computer Science and Artificial Intelligence Laboratory (CSAIL); and senior author Patrick Jaillet, the Dugald C. Jackson Professor of Electrical Engineering and Computer Science and a principal investigator in the Laboratory for Information and Decision Systems (LIDS). The research was recently presented at the International Conference on Machine Learning.

Studying symmetry

Symmetric data appear in many domains, especially the natural sciences and physics. A model that recognizes symmetries is able to identify an object, like a car, no matter where that object is placed in an image, for example.

Unless a machine-learning model is designed to handle symmetry, it could be less accurate and prone to failure when faced with new symmetric data in real-world situations. On the flip side, models that take advantage of symmetry could be faster and require fewer data for training.

But training a model to process symmetric data is no easy task.

One common approach is called data augmentation, where researchers transform each symmetric data point into multiple data points to help the model generalize better to new data. For instance, one could rotate a molecular structure many times to produce new training data, but if researchers want the model to be guaranteed to respect symmetry, this can be computationally prohibitive.
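A minimal augmentation routine (an assumed illustration, not the researchers’ code) might attach several randomly rotated copies to each labeled molecule; the cost is that the training set grows by a factor of the number of copies, which is what makes guarantees via augmentation expensive.

# Hypothetical data-augmentation sketch; names and the toy molecule are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def random_rotation_z(rng) -> np.ndarray:
    """Draw a random rotation about the z-axis."""
    theta = rng.uniform(0.0, 2.0 * np.pi)
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def augment(coords: np.ndarray, label: float, n_copies: int = 8):
    """Return the original sample plus n_copies rotated duplicates, all sharing one label."""
    samples = [(coords, label)]
    for _ in range(n_copies):
        samples.append((coords @ random_rotation_z(rng).T, label))
    return samples

molecule = np.array([[0.0, 0.0, 0.0], [1.1, 0.0, 0.0], [0.0, 1.5, 0.2]])
augmented = augment(molecule, label=0.7)
print(len(augmented))  # 9 training samples derived from a single labeled molecule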

An alternative approach is to encode symmetry into the model’s architecture. A well-known example of this is a graph neural network (GNN), which inherently handles symmetric data because of how it is designed.

“Graph neural networks are fast and efficient, and they take care of symmetry quite well, but nobody really knows what these models are learning or why they work. Understanding GNNs is a main motivation of our work, so we started with a theoretical evaluation of what happens when data are symmetric,” Tahmasebi says.
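The kind of symmetry a GNN bakes in can be seen in a minimal message-passing sketch (illustrative only, not the authors’ model): aggregating neighbor features by sum and then averaging over nodes yields the same graph-level embedding no matter how the nodes are ordered, because sums and means do not care about ordering.

# Illustrative permutation-invariance check; the graph, weights, and layer are made up.
import numpy as np

def mp_layer(adj: np.ndarray, feats: np.ndarray, W: np.ndarray) -> np.ndarray:
    """One round of message passing: sum neighbor features, then apply a learned transform."""
    return np.tanh((adj @ feats) @ W)

def graph_readout(adj: np.ndarray, feats: np.ndarray, W: np.ndarray) -> np.ndarray:
    """Permutation-invariant graph embedding: mean-pool the node embeddings."""
    return mp_layer(adj, feats, W).mean(axis=0)

rng = np.random.default_rng(1)
adj = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)  # a 3-node graph
feats = rng.normal(size=(3, 4))
W = rng.normal(size=(4, 4))

perm = np.array([2, 0, 1])          # relabel (permute) the nodes
adj_p = adj[perm][:, perm]
feats_p = feats[perm]
print(np.allclose(graph_readout(adj, feats, W),
                  graph_readout(adj_p, feats_p, W)))  # True: same graph, same embedding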

They explored the statistical-computational tradeoff in machine learning with symmetric data. This tradeoff means methods that require fewer data can be more computationally expensive, so researchers need to find the right balance.

Building on this theoretical evaluation, the researchers designed an efficient algorithm for machine learning with symmetric data.

Mathematical combinations

To do this, they borrowed ideas from algebra to shrink and simplify the problem. Then, they reformulated the problem using ideas from geometry that effectively capture symmetry.

Finally, they combined the algebra and the geometry into an optimization problem that can be solved efficiently, resulting in their new algorithm.

“Most of the theory and applications were focusing on either algebra or geometry. Here we just combined them,” Tahmasebi says.

The algorithm requires fewer data samples for training than classical approaches, which would improve a model’s accuracy and ability to adapt to new applications.

By proving that scientists can develop efficient algorithms for machine learning with symmetry, and demonstrating how it can be done, these results could lead to new neural network architectures that may be more accurate and less resource-intensive than current models.

Scientists could also use this analysis as a starting point to examine the inner workings of GNNs, and how their operations differ from the algorithm the MIT researchers developed.

“Once we know that better, we can design more interpretable, more robust, and more efficient neural network architectures,” adds Soleymani.

This research is funded, in part, by the National Research Foundation of Singapore, DSO National Laboratories of Singapore, the U.S. Office of Naval Research, the U.S. National Science Foundation, and an Alexander von Humboldt Professorship.

© Credit: iStock, MIT News

A new study by MIT researchers shows the first method for machine learning with symmetry that is provably efficient in terms of both the amount of computation and data needed.

“FUTURE PHASES” showcases new frontiers in music technology and interactive performance

Music technology took center stage at MIT during “FUTURE PHASES,” an evening of works for string orchestra and electronics, presented by the MIT Music Technology and Computation Graduate Program as part of the 2025 International Computer Music Conference (ICMC). 

The well-attended event was held last month in the Thomas Tull Concert Hall within the new Edward and Joyce Linde Music Building. Produced in collaboration with the MIT Media Lab’s Opera of the Future Group and Boston’s self-conducted chamber orchestra A Far Cry, “FUTURE PHASES” was the first event to be presented by the MIT Music Technology and Computation Graduate Program in MIT Music’s new space.

“FUTURE PHASES” offerings included two new works by MIT composers: the world premiere of “EV6,” by MIT Music’s Kenan Sahin Distinguished Professor Evan Ziporyn and professor of the practice Eran Egozy; and the U.S. premiere of “FLOW Symphony,” by the MIT Media Lab’s Muriel R. Cooper Professor of Music and Media Tod Machover. Three additional works were selected by a jury from an open call for works: “The Wind Will Carry Us Away,” by Ali Balighi; “A Blank Page,” by Celeste Betancur Gutiérrez and Luna Valentin; and “Coastal Portrait: Cycles and Thresholds,” by Peter Lane. Each work was performed by Boston’s own multi-Grammy-nominated string orchestra, A Far Cry.

“The ICMC is all about presenting the latest research, compositions, and performances in electronic music,” says Egozy, director of the new Music Technology and Computation Graduate Program at MIT. When approached to be a part of this year’s conference, “it seemed the perfect opportunity to showcase MIT’s commitment to music technology, and in particular the exciting new areas being developed right now: a new master’s program in music technology and computation, the new Edward and Joyce Linde Music Building with its enhanced music technology facilities, and new faculty arriving at MIT with joint appointments between MIT Music and Theater Arts (MTA) and the Department of Electrical Engineering and Computer Science (EECS).” These recently hired professors include Anna Huang, a keynote speaker for the conference and creator of the machine learning model Coconet that powered Google’s first AI Doodle, the Bach Doodle.

Egozy emphasizes the uniqueness of this occasion: “You have to understand that this is a very special situation. Having a full 18-member string orchestra [A Far Cry] perform new works that include electronics does not happen very often. In most cases, ICMC performances consist either entirely of electronics and computer-generated music, or perhaps a small ensemble of two-to-four musicians. So the opportunity we could present to the larger community of music technology was particularly exciting.”

To take advantage of this exciting opportunity, an open call was put out internationally to select the other pieces that would accompany Ziporyn and Egozy’s “EV6” and Machover’s “FLOW Symphony.” A panel of judges that included Egozy, Machover, and other distinguished composers and technologists selected three of the 46 entries for the evening’s program.

“We received a huge variety of works from this call,” says Egozy. “We saw all kinds of musical styles and ways that electronics would be used. No two pieces were very similar to each other, and I think because of that, our audience got a sense of how varied and interesting a concert can be for this format. A Far Cry was really the unifying presence. They played all pieces with great passion and nuance. They have a way of really drawing audiences into the music. And, of course, with the Thomas Tull Concert Hall being in the round, the audience felt even more connected to the music.”

Egozy continues, “We took advantage of the technology built into the Thomas Tull Concert Hall, which has 24 built-in speakers for surround sound allowing us to broadcast unique, amplified sound to every seat in the house. Chances are that every person might have experienced the sound slightly differently, but there was always some sense of a multidimensional evolution of sound and music as the pieces unfolded.”

The five works of the evening employed a range of technological components that included playing synthesized, prerecorded, or electronically manipulated sounds; attaching microphones to instruments for use in real-time signal processing algorithms; broadcasting custom-generated musical notation to the musicians; utilizing generative AI to process live sound and play it back in interesting and unpredictable ways; and audience participation, where spectators use their cellphones as musical instruments to become a part of the ensemble.

Ziporyn and Egozy’s piece, “EV6,” took particular advantage of this last innovation: “Evan and I had previously collaborated on a system called Tutti, which means ‘together’ in Italian. Tutti gives an audience the ability to use their smartphones as musical instruments so that we can all play together.” Egozy developed the technology, which was first used in the MIT Campaign for a Better World in 2017. The original application involved a three-minute piece for cellphones only. “But for this concert,” Egozy explains, “Evan had the idea that we could use the same technology to write a new piece — this time, for audience phones and a live string orchestra as well.”

To explain the piece’s title, Ziporyn says, “I drive an EV6; it’s my first electric car, and when I first got it, it felt like I was driving an iPhone. But of course it’s still just a car: it’s got wheels and an engine, and it gets me from one place to another. It seemed like a good metaphor for this piece, in which a lot of the sound is literally played on cellphones, but still has to work like any other piece of music. It’s also a bit of an homage to David Bowie’s song ‘TVC 15,’ which is about falling in love with a robot.”

Egozy adds, “We wanted audience members to feel what it is like to play together in an orchestra. Through this technology, each audience member becomes a part of an orchestral section (winds, brass, strings, etc.). As they play together, they can hear their whole section playing similar music while also hearing other sections in different parts of the hall play different music. This allows an audience to feel a responsibility to their section, hear how music can move between different sections of an orchestra, and experience the thrill of live performance. In ‘EV6,’ this experience was even more electrifying because everyone in the audience got to play with a live string orchestra — perhaps for the first time in recorded history.”

After the concert, guests were treated to six music technology demonstrations that showcased the research of undergraduate and graduate students from both the MIT Music program and the MIT Media Lab. These included a gamified interface for harnessing just intonation systems (Antonis Christou); insights from a human-AI co-created concert (Lancelot Blanchard and Perry Naseck); a system for analyzing piano playing data across campus (Ayyub Abdulrezak ’24, MEng ’25); capturing music features from audio using latent frequency-masked autoencoders (Mason Wang); a device that turns any surface into a drum machine (Matthew Caren ’25); and a play-along interface for learning traditional Senegalese rhythms (Mariano Salcedo ’25). This last example led to the creation of Senegroove, a drumming-based application specifically designed for an upcoming edX online course taught by ethnomusicologist and MIT associate professor in music Patricia Tang, and world-renowned Senegalese drummer and MIT lecturer in music Lamine Touré, who provided performance videos of the foundational rhythms used in the system.

Ultimately, Egozy muses, “'FUTURE PHASES' showed how having the right space — in this case, the new Edward and Joyce Linde Music Building — really can be a driving force for new ways of thinking, new projects, and new ways of collaborating. My hope is that everyone in the MIT community, the Boston area, and beyond soon discovers what a truly amazing place and space we have built, and are still building here, for music and music technology at MIT.”

© Photo: Jonathan Sachs

A Far Cry performs “EV6” as part of the “FUTURE PHASES” concert at MIT.

Getting to the root of teen distracted driving

Health

Getting to the root of teen distracted driving

person looking at their phone while driving

Anna Lamb

Harvard Staff Writer

3 min read

7 in 10 young people use cellphones while behind the wheel, finds a new study that also takes a look at why

Every year, hundreds of people die in automobile accidents involving distracted teen drivers. A new study zeroes in on one of the most common forms of distraction, cellphone use, exploring how often young people engage in the risky behavior and why.

A team of public health researchers led by Rebecca Robbins, an assistant professor at Harvard Medical School and a scientist at Brigham and Women’s Hospital, surveyed teens across the country to find out the ways in which they use their phones while driving and how that behavior might be curbed.

They found that seven in 10 high school students reported using their phones or making long glances toward them while driving — many glances lasting two seconds or longer — for about 20 percent of each trip.

“That’s a huge proportion — putting themselves and the traveling public around them at risk,” said Robbins.

Looking away from the road for the time it takes to read or send a text message, activate maps, or check social media, she added, is associated with a 5.5 times greater likelihood of a crash.

Most teens in the study said they believed their peers engaged in distracted driving. Robbins said teens’ beliefs about what their peers are doing are strongly associated with their own behavior, so many think it’s normal to check their phones while driving, despite the risks.

“Young people harbor beliefs that looking at their phone offers benefits.”

Rebecca Robbins

“Young people harbor beliefs that looking at their phone offers benefits,” she said. “It allows them to be entertained. It allows them to get where they’re going. That is what we call a maladaptive belief that would need to be corrected with behavioral intervention.”

Among participants who reported using their phones while driving, the most common reasons were entertainment (65 percent), followed by texting (40 percent) and navigation (30 percent).


Yet Robbins emphasized that three in 10 respondents reported practicing focused driving.

“Young people had bright spots around them, of role models that were practicing safe driving practices such as avoiding phone use while driving, that was inversely associated with reports of young people distracted-driving themselves,” she said.

Additionally, Robbins said, teens’ attitudes toward their own ability to make educated choices played a role.

“We also found a significant association between self-efficacy and distracted driving, such that stronger self-efficacy beliefs or beliefs that they could avoid distracted driving, avoid the temptation, put their phone in the backseat, turn on ‘Do Not Disturb’ mode, any number of those in the constellation of safe driving practices, was inversely associated with distracted driving,” she said. 

Robbins said information gleaned through the study could be used to craft public health messaging campaigns and behavioral interventions like those that have promoted seat belt use. “This research suggested a number of promising avenues for future research, such as a campaign that would emphasize the benefits of using ‘Do Not Disturb’ mode and empowering young people to turn that mode on, or have it automatically turn on, while they’re driving.”
