Chronic wounds are a major medical challenge, burdening healthcare systems with billions of dollars in costs every year. Pioneer Fellow Börte Emiroglu is developing a new product: a selective, sponge-like hydrogel that reduces inflammatory signals and actively promotes healing.
Clean, safe water is vital for human health and well-being. It also plays a critical role in our food security, supports high-tech industries, and enables sustainable urbanisation. However, detecting contamination quickly and accurately remains a major challenge in many parts of the world.A groundbreaking new device developed by researchers at the National University of Singapore (NUS) has the potential to significantly advance water quality monitoring and management.
Taking inspiration from the biological function of the oily protective layer found on human skin, a team of researchers led by Associate Professor Benjamin Tee from the Department of Materials Science and Engineering in the College of Design and Engineering at NUS translated this concept into a versatile material, named ReSURF, capable of spontaneously forming a water-repellent interface. This new material, which can be prepared through a rapid micro-phase separation approach, autonomously self-heals and can be recycled. The researchers incorporated the material into a device known as a triboelectric nanogenerator (TENG), which uses the energy from the movement of water droplets to create an electric charge. The resulting device (ReSURF sensor) can be applied as a water quality monitor.
“The ReSURF sensor can detect various pollutants, such as oils and fluorinated compounds, which are challenging for many existing sensors. This capability, together with unique features such as self-powered, self-healing, reusability and recyclability, positions ReSURF as a sustainable solution for real-time, on-site, and sustainable water quality monitoring,” said Assoc Prof Tee.
The team’s design of the ReSURF material and performance of the novel water quality sensor were published in the scientific journal Nature Communications on 1 July 2025.
Rapid and sustainable water quality sensing
Existing water quality monitoring technologies such as electrochemical sensors, optical detection systems, and biosensors are effective in certain specific applications, such as detecting heavy metals, phosphorus, and microbial pollution.
However, these technologies often face limitations including slow response, high costs, reliance on external reagents or power sources, limited reusability, and the need for bulky laboratory equipment or specialised instrumentation.
The ReSURF sensor developed by the NUS team effectively overcomes these challenges, particularly in on-site real-time water quality sensing. The self-powered device has demonstrated the ability to detect water contaminants in approximately 6 milliseconds (i.e. around 40 times faster than a blink of the eye).
Additionally, the ReSURF sensor is designed to be self-healing and recyclable, making it a sustainable and low-maintenance solution. Being stretchable and transparent, the material can be easily integrated into flexible platforms, including soft robotics and wearable electronics, setting it apart from conventional sensing materials.
Furthermore, the ReSURF material offers an environmentally friendly sensing solution: because it dissolves readily in solvents, it can be easily recycled and reused in new devices without any loss in performance.
ReSURF sensor: How it works
The ReSURF sensor monitors water quality by analysing the electrical signals generated when analytes — such as salts, oils, or pollutants — in water droplets contact its surface. When droplets containing analytes strike the sensor’s water-repellent surface, they spread out and slide off quickly, generating electric charges within milliseconds. The magnitude and characteristics of the resulting signal vary with the composition and concentration of the analytes present. By monitoring these signals in real time, the ReSURF sensor can rapidly and accurately assess water quality without the need for external power sources.
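The sensing principle described above (signal magnitude and shape varying with analyte composition and concentration) can be illustrated with a toy sketch. The voltage thresholds and labels below are hypothetical placeholders for illustration only, not figures from the NUS study:

```python
# Illustrative sketch, NOT the NUS team's code: mapping a droplet's
# triboelectric voltage peak to a coarse water-quality label.
# All thresholds and category names here are invented placeholders.

def classify_droplet(peak_voltage_mv):
    """Classify a droplet by its (hypothetical) peak voltage in millivolts."""
    if peak_voltage_mv > 80.0:
        return "clean water"
    elif peak_voltage_mv > 40.0:
        return "possible salt contamination"
    else:
        return "possible oil/surfactant contamination"

# A stream of peak voltages from successive droplets (simulated values).
readings = [95.2, 92.1, 35.4, 30.8]
labels = [classify_droplet(v) for v in readings]
print(labels)
```

A real deployment would of course examine the full signal waveform rather than a single peak value, but the sketch captures the core idea: each contaminant class leaves a distinguishable electrical signature.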
To demonstrate its capabilities, the researchers mounted the ReSURF sensor on a pufferfish-like soft robot and used it to detect oil and perfluorooctanoic acid — a common contaminant found in water sources — in water. The two contaminants produced distinct voltage signals, providing a proof of concept that the ReSURF sensor can be used for early surveillance of possible contamination.
Safeguarding water quality
The ReSURF sensor offers broad application potential. It can be deployed in rivers, lakes, and reservoirs to enable early surveillance of pollutants, allowing for quick response to water contamination emergencies. In agriculture, it is capable of monitoring water safety in areas like rice fields. In industrial settings and sewage treatment plants, the ReSURF sensor could provide valuable insights for wastewater management.
Next steps
The research team hopes to optimise the ReSURF sensor by enhancing the specificity of pollutant detection, integrating wireless data transmission capabilities, and scaling the system for long-term or large-scale environmental monitoring. Additionally, the researchers plan to explore more eco-friendly material alternatives to enhance sustainability and align with evolving environmental regulations.
“Future iterations could integrate additional sensing modalities or machine learning–based signal analysis to enable more precise identification and classification of pollutants. We envision this platform as a foundation for the development of more intelligent and responsive water quality monitoring systems,” said Assoc Prof Tee.
As countries race towards achieving net-zero emissions through renewable energy adoption, research plays a pivotal role in shaping how we harness these greener energy sources to power our cities, move people, and manage resources. Professor Dipti Srinivasan from the Department of Electrical and Computer Engineering at the College of Design and Engineering at NUS is combining her passion for artificial intelligence (AI) with a deep commitment to sustainability by developing smart tools that make clean technologies, like electric buses and renewable energy, not just viable, but efficient and scalable.
Her journey began with a simple yet powerful question: How can AI solve real-world energy problems? Over time, this curiosity evolved into a focused mission — to help society reduce its reliance on fossil fuels by making renewable energy sources, such as solar and wind, more reliable and accessible.
“I wanted to find smart, data-driven ways to help integrate renewable energy sources better into our power systems and support a cleaner, more sustainable future,” Prof Srinivasan explained.
A data-driven vision for greener cities
Prof Srinivasan’s current research investigates how computational intelligence — drawing on nature-inspired methods like neural networks and evolutionary algorithms — can optimise renewable energy integration and electrified transport systems.
Computational tools are particularly useful for managing complex systems, such as city-wide electric bus networks or national power grids: they provide insights for planning, help balance supply and demand, and support decision-making under constraints such as battery capacity or power grid limits.
Prof Srinivasan and her team leverage evolutionary computation, which mimics natural selection, to search for solutions by keeping the best-performing candidates and improving them over time — just as nature evolves stronger species. The research team applies this technique to determine the best locations and sizes for battery storage, so that energy is stored and delivered efficiently across the power grid.
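The keep-the-best-and-mutate loop described here can be sketched in a few lines. The demand figures, cost model and parameters below are invented for illustration; the team's actual formulation is far richer:

```python
# Minimal evolutionary-computation sketch in the spirit described above:
# keep the best candidate solutions each generation and mutate them.
# The problem setup (a toy battery-sizing cost) is illustrative only.
import random

random.seed(0)

DEMAND = [3.0, 5.0, 2.0, 4.0]  # hypothetical peak demand at four grid nodes (MWh)

def cost(sizes):
    # Penalise unmet demand plus a capital cost for installed capacity.
    unmet = sum(max(d - s, 0.0) for d, s in zip(DEMAND, sizes))
    capital = 0.2 * sum(sizes)
    return unmet + capital

def mutate(sizes):
    # Small random perturbation of each battery size (never negative).
    return [max(0.0, s + random.gauss(0, 0.5)) for s in sizes]

# (mu + lambda) evolution: retain the best, generate mutated offspring.
population = [[random.uniform(0, 6) for _ in DEMAND] for _ in range(20)]
for generation in range(100):
    population.sort(key=cost)
    parents = population[:5]                      # survival of the fittest
    offspring = [mutate(random.choice(parents)) for _ in range(15)]
    population = parents + offspring

best = min(population, key=cost)
print(round(cost(best), 2))
```

Each generation discards weak candidates and perturbs strong ones, so the population drifts towards battery sizes that cover demand at low capital cost, which is the same selection-and-improvement dynamic the article describes.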
Smarter charging, smarter fleets
In a study last year, Prof Srinivasan and Dr Can Bark Saner, a research fellow from the Department of Mathematics at the NUS Faculty of Science, introduced a multi-module optimisation framework for the planning and operation of electric bus (e-bus) shuttle fleets to reduce life cycle cost and maximise savings on charger procurement, electricity, and battery degradation.
As part of the framework, the NUS team proposed a three-module model comprising:
· a vehicle scheduling module to determine e-bus deployment and trip assignments to ensure alignment with energy consumption and mitigate battery degradation;
· a charger deployment and charging planning module that determines the number of chargers to deploy at depots and across e-bus charging schedules to minimise life cycle costs; and
· an online charging scheduling module which updates charging schedules to handle uncertainties in trip energy consumption.
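A minimal sketch of the online charging-scheduling idea in the third module, assuming a simple greedy rule over hourly electricity prices; the prices, charger rating and energy figures are made up for illustration:

```python
# Toy sketch of "online charging scheduling": pick the cheapest depot
# hours to recharge, then re-plan when the energy actually consumed on
# a trip deviates from the forecast. All figures are hypothetical.

def plan_charging(prices, energy_needed, charger_kwh_per_hour=50.0):
    """Greedily charge in the cheapest hours until the need is met."""
    hours = sorted(range(len(prices)), key=lambda h: prices[h])
    schedule, remaining = [], energy_needed
    for h in hours:
        if remaining <= 0:
            break
        schedule.append(h)
        remaining -= charger_kwh_per_hour
    return sorted(schedule)

prices = [0.30, 0.12, 0.10, 0.25, 0.40]  # $/kWh for five hourly slots
plan = plan_charging(prices, energy_needed=120.0)
print(plan)  # the three cheapest hours cover the forecast need

# Online update: the bus used more energy than forecast, so re-plan.
revised = plan_charging(prices, energy_needed=160.0)
print(revised)
```

The real framework jointly optimises charger counts, trip assignments and battery degradation, but the re-planning step above mirrors how the online module reacts to uncertainty in trip energy consumption.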
The framework exemplifies the team’s focus on computational intelligence-based decision-making, especially in the context of large-scale electric vehicle (EV) charging and integration with renewable power. With the proposed framework, they demonstrated a life cycle cost reduction of up to 38.2 per cent, including up to a 90.2 per cent decrease in battery degradation cost.
“We’re working on how to manage EV charging at scale, especially for large fleets in cities, workplaces, or public charging hubs. The goal is to maximise the use of solar and wind power during EV charging, by aligning charging schedules with periods of high renewable energy generation. That way, we make the most of renewable energy and reduce stress on the grid,” said Prof Srinivasan.
The team is also developing algorithms to support Vehicle-to-Grid (V2G) technologies, allowing EVs not just to consume power, but to return it to the electrical grid when needed — turning EVs into mobile storage units that help stabilise the power system.
Beyond technologies: towards consumer adoption
Integrating EVs and renewable energy into existing infrastructure is not just a technical challenge; it involves various stakeholders, from industry partners to consumers. Prof Srinivasan stresses the importance of looking beyond infrastructure. For clean technologies to succeed, people need to understand, trust, and feel supported in adopting them.
“We must think about affordability, ease of use, and awareness. People need clear information, strong incentives, and policies that support their choices,” said Prof Srinivasan.
She added, “People need access to clear information, financial incentives, and reliable technology that fits seamlessly into their lives. Supportive policies and a strong focus on consumer behaviour and acceptance also play a key role in driving the transition to clean energy.”
Envisioning Singapore’s renewable energy future
Looking towards a sustainable future, Prof Srinivasan sees enormous potential in Singapore’s approach to energy innovation.
She envisions a future where renewable energy plays a central role in Singapore’s power system — enabled by smart tools, supported by strong policy, and integrated into everyday life. Prof Srinivasan highlighted that with land constraints, breakthroughs are needed in solar deployment, energy storage, and grid management.
At the heart of her work is a belief that technology, when designed thoughtfully and deployed strategically, can drive real change for a greener and more sustainable future with renewable energy.
“This work isn’t just about algorithms or software. It’s about building systems that support a cleaner, more resilient future, and making sure that the shift to renewables and electric mobility is not just possible, but practical,” said Prof Srinivasan.
As cities and countries plan for more e-buses, greener grids, and sustainable transport systems, Prof Srinivasan’s research offers a critical piece of the puzzle — ensuring we don’t just adopt clean technology, but do so intelligently, affordably, and equitably.
By Adjunct Assoc Prof Jeremy Lim from the Saw Swee Hock School of Public Health at NUS; Dr Taufeeq Wahab from NUHS; and Ms Sheryl Ha, an incoming student at Duke-NUS Medical School
CNA Online, 28 June 2025
CNA938, 26 June 2025
Oli 96.8FM, 26 June 2025
CNA Online, 26 June 2025
8world Online, 26 June 2025
Suria News Online, 26 June 2025
The Straits Times, 27 June 2025, The Big Story, Front Page & pA2
Lianhe Zaobao, 27 June 2025, Singapore, p7
Tamil Murasu, 27 June 2025, Front Page
Scientists at the National University of Singapore (NUS) have demonstrated a perovskite–organic tandem solar cell with a certified world-record power conversion efficiency of 26.4 per cent over a 1 cm2 active area — making it the highest-performing device of its kind to date. This milestone is driven by a newly designed narrow-bandgap organic absorber that significantly enhances near-infrared (NIR) photon harvesting, a long-standing bottleneck in thin-film tandem solar cells.
The NUS research team published their groundbreaking work in the prestigious scientific journal Nature on 25 June 2025.
Unlocking the promise of tandem solar cells
Perovskite and organic semiconductors both offer widely tunable bandgaps, enabling tandem cells to approach very high theoretical efficiencies. “Thanks to their light weight and flexible form factor, perovskite–organic tandem solar cells are ideally suited to power applications that are run directly on devices such as drones, wearable electronics, smart fabrics and other AI-enabled devices,” said Asst Prof Hou.
However, the absence of efficient NIR thin-film absorbers — which capture sunlight in the NIR region more efficiently and hence improve the overall efficiency of tandem cells — has kept perovskite–organic tandem cells lagging behind alternative designs.
Harnessing the near-infrared
To overcome this challenge, Asst Prof Hou and his team developed an asymmetric organic acceptor with an extended conjugation structure, enabling absorption deep into the NIR region while maintaining a sufficient driving force for efficient charge separation and promoting ordered molecular packing. Ultrafast spectroscopy and device physics analyses confirmed that this design achieves high free charge carrier collection with minimal energy loss.
Building on the organic subcell’s performance, the researchers stacked it beneath a high-efficiency perovskite top cell, interfacing the two layers with a transparent conducting oxide (TCO)-based interconnector.
The newly designed tandem cell achieved a power conversion efficiency of 27.5 per cent on 0.05-cm2 samples and 26.7 per cent on 1-cm2 devices, with the 26.4 per cent result independently certified. These findings mark the highest certified performance to date among perovskite–organic, perovskite–CIGS, and single-junction perovskite cells at comparable size.
“With efficiencies poised to exceed 30 per cent, these flexible films are ideal for roll-to-roll production and seamless integration onto curved or fabric substrates — think self-powered health patches that harvest sunlight to run onboard sensors, or smart textiles that monitor biometrics without the need for bulky batteries,” noted Asst Prof Hou.
Next step
In the next phase of their research, the NUS team will focus on enhancing real-world operational stability and advancing towards pilot-line manufacturing — crucial steps in bringing flexible, high-performance solar technology to market.
The National University of Singapore (NUS) has appointed Professor Tulika Mitra as the new Dean for the NUS School of Computing (NUS Computing), and Associate Professor Leong Ching as the Acting Dean for the Lee Kuan Yew School of Public Policy (LKYSPP). Both appointments will take effect from 1 July 2025. Assoc Prof Leong will hold the Acting Dean appointment concurrently with her role as Vice Provost (Student Life), while a Dean search is underway.
They succeed current Deans Professor Tan Kian Lee and Professor Danny Quah respectively, who are returning to their academic pursuits at NUS in research and teaching.
NUS President Professor Tan Eng Chye, said, “We are pleased to appoint Prof Tulika Mitra as Dean of the School of Computing and Assoc Prof Leong Ching as Acting Dean of the Lee Kuan Yew School of Public Policy while a search for the Dean is being carried out. With their extensive academic and industry experience, I am confident that their leadership will propel NUS Computing and LKYSPP towards higher levels of excellence, enabling us to be at the forefront of teaching, research and innovation while nurturing future-ready students with deep intellectual rigour and the resilience and adaptability to thrive in today’s digital economy.”
School of Computing
Prof Mitra has served as Vice Provost (Academic Affairs) since January 2021 and as the Chair of the University Promotion and Tenure Committee (UPTC) since May 2020. From fostering a more supportive and robust promotion and tenure culture to introducing induction programmes for new faculty, she has been instrumental in identifying and recruiting top academic talent, upholding academic excellence, and strengthening the Singaporean academic pipeline. She has created a direct pathway to Full Professorship through impactful educational leadership and incorporated clear guidelines for practice track promotions. Her contributions were recognised with the Singapore Public Administration Medal (Silver) in 2024.
Prof Mitra is a leading expert in hardware-software codesign of computing systems, specialising in real-time embedded systems and energy-efficient AI accelerators. Currently Provost’s Chair Professor in the Department of Computer Science, she has been with NUS Computing since 2001. She has served as the Editor-in-Chief of ACM Transactions on Embedded Computing Systems, Member of the ACM Publications Board, and General/Program Chair of many conferences. Additionally, she serves on the DSTA Board of Directors, Scientific Advisory Board of MPI-SWS, Barkhausen Institute, and international expert panels of the Chinese University of Hong Kong, INRIA France, and KTH Sweden.
On taking up her new appointment, Prof Mitra said, “I am honoured to lead the School at a time when our discipline is central to interdisciplinary innovations reshaping the modern world. I look forward to returning to my roots and working closely with the NUS Computing family of exceptional colleagues and students.”
Prof Mitra will concurrently take on the role of Vice Provost (Special Projects) in addition to helming the School of Computing. She will support the Deputy President (Academic Affairs) and Provost in leading high-impact strategic initiatives for the University.
Lee Kuan Yew School of Public Policy
Associate Professor Leong Ching joined LKYSPP in 2014 and was appointed to her current position in 2019.
An economist with a focus on applying institutional theory to the policy sciences, Assoc Prof Leong is today among the leading scholars in the field of behavioural public policy (BPP), with visiting appointments at the London School of Economics and Cambridge University.
She uses large field experiments to understand the motivating forces behind government and public behaviour. Her work has had significant impact on water policy and sustainability issues, as well as on public willingness to accept novelty, such as recycled drinking water as a solution to global water scarcity. During the COVID-19 pandemic, her work on the willingness to accept new science in the form of mRNA vaccines was cited by the World Health Organisation (WHO) as an important behavioural intervention to reduce hesitancy.
Beyond her contributions to public policy research and academic excellence, Assoc Prof Leong is committed to student growth and development. In her current role as Vice Provost (Student Life), she has overseen the integration of student life into the NUS curriculum.
In her new role as Acting Dean at LKYSPP, Assoc Prof Leong will build on LKYSPP’s strengths and track record in research, education and engagement, while fostering partnerships with government, industry and society. She will be supported by the current members of the LKYSPP Deanery who will continue to contribute towards leadership for the School.
Assoc Prof Leong said, “The Lee Kuan Yew School of Public Policy is built on a belief in the transformational power of policy ideas. I look forward to working with my colleagues to continue producing these global public goods so as to inform government and public decisions on the most pressing problems of our time.”
New Vice Provost (Academic Affairs)
Professor Ho Ghim Wei, who has been serving as Associate Provost (Academic Affairs), will succeed Prof Mitra as Vice Provost (Academic Affairs) from 1 July 2025. In her new role, she will lead the University’s efforts to nurture, develop and empower faculty members towards excellence in education, research and innovation.
At the Office of the Provost, Prof Ho has been supporting the Vice Provost in overseeing the academic review process and upholding standards of excellence in faculty career progression. She has been playing key roles in organising faculty development workshops and events, as well as in outreach and recruitment efforts.
Prof Ho is an outstanding scholar from the College of Design and Engineering (CDE), where she leads the Sustainable Smart Solar Systems research group. Her team conducts fundamental and applied research on nanosystems based on emerging low-dimensional nanomaterials, interfacial interactions and hybridised functionalities for applications in energy, the environment, electronics and healthcare. She also served as Vice Dean for Student Life at CDE for over four years.
In appreciation
Expressing his gratitude to the two outgoing deans for their service, President Tan said, “I am deeply grateful to Kian Lee and Danny for their leadership and service. A homegrown talent, Kian Lee is an outstanding researcher, data scientist and educator who has steered the faculty from a small department in its early years into one of the world’s top and highly competitive computing schools. Danny is an eminent economist who has enabled LKYSPP to strengthen and anchor its position as a global thought leader, advancing impactful policy solutions and training policy makers for Singapore, the region, and beyond.”
“As they return to academia, I look forward to their continued contributions to NUS in inspiring future generations of researchers and students to shape the future of technology and serve for the greater good of society,” President Tan added.
Under Prof Tan Kian Lee’s leadership, NUS Computing has flourished, consistently drawing in and nurturing the best and brightest talents. In tandem with the growing prominence and impact of Artificial Intelligence (AI), the School has expanded its curriculum offerings with the launch of three new AI-centric degree programmes, and opened Sea Building and Sea Connect — featuring new collaborative spaces for teaching and innovation, and home to 12 research labs that will catalyse long-term, fruitful collaborations between academia and industry.
These efforts are vital in driving NUS’ bold ambitions in cutting-edge research, education, and collaboration, advancing the rapidly evolving fields in computing such as AI and data science.
A distinguished global scholar, Prof Quah returned from London in 2016 and joined LKYSPP to pursue research in and contribute to policy development on international economic relations, income inequality, and economic growth. Over the past seven years as Dean of LKYSPP, he has been a visionary and transformative leader.
Under his stewardship, the School has strengthened its position as a leader and authority in public policy research and education, with distinct focus on Asia’s unique challenges and opportunities. Among the significant initiatives that expanded the School’s reach and impact are enhanced leadership training programmes; the Global-is-Asian platform, advancing research and collaboration across the Asia-Pacific and beyond; and the biennial Festival of Ideas, a flagship forum for policy dialogue bringing together experts and opinion leaders to address the most pressing issues of our time.
A prolific writer and sought-after speaker, Prof Quah has worked to push the frontiers of research in his field and, at leading international forums, drawn on that academic research to provide thought leadership in economic policy, global governance, and Asia’s role in international affairs.
By Mr Tan Kway Guan, Research Associate and Principal Project Manager, and Dr Yi Xin, Research Fellow, both from the Asia Competitiveness Institute, Lee Kuan Yew School of Public Policy at NUS
The Business Times, 19 June 2025, p16
The Straits Times, 19 June 2025, The Big Story, pA6
The New Paper, 19 June 2025
Tamil Murasu, 19 June 2025, p2
CNA (TV News), 19 June 2025
Channel 8, 19 June 2025
CNA938, 19 June 2025
Money 89.3FM, 19 June 2025
Hao 96.3FM, 19 June 2025
Warna 94.2FM, 19 June 2025
Suria News Online, 19 June 2025
Vasantham News Online, 19 June 2025
Lianhe Zaobao, 20 June 2025, Opinion, p3
Science & Tech
Mounting case against notion that boys are born better at math
Elizabeth Spelke studies French testing data, finds no gender gap until instruction begins
Christy DeSmith
Harvard Staff Writer
July 3, 2025
6 min read
Elizabeth Spelke. Stephanie Mitchell/Harvard Staff Photographer
Twenty years ago, cognitive psychologist Elizabeth Spelke took a strong position in an ongoing public debate.
“There are no differences in overall intrinsic aptitude for science and mathematics among women and men,” the researcher declared.
A new paper in the journal Nature, written by Spelke and a team of European researchers, provides what she called “an even stronger basis for that argument.”
A French government testing initiative launched in 2018 provided data on the math skills of more than 2.5 million schoolchildren over five years. Analyses showed virtually no gender differences at the start of first grade, when students begin formal math education. However, a gap favoring boys opened after just four months — and kept growing through higher grades.
The results support previous research findings based on far smaller sample sizes in the U.S. “The headline conclusion is that the gender gap emerges when systematic instruction in mathematics begins,” summarized Spelke, the Marshall L. Berkman Professor of Psychology.
Back in 2005, her position was informed by decades of work studying sensitivity to numbers and geometry in the youngest members of human society.
“My argument was, ‘OK, if there really were biological differences, maybe we would see them in the infancy period,’” recalled Spelke, who laid out her evidence in a critical review for the journal American Psychologist that year.
“We were always reporting on the gender composition of our studies, as well as the relative performance of boys and girls,” Spelke continued. “But we were never finding any differences favoring either gender over the other.”
The possibility remained that differences in skill or even motivation surface later in the lifecycle.
“The fact that there are no differences in infants could be because the abilities that show gender effects actually emerge during preschool,” Spelke said.
In recent years, the psychologist has been applying her research on early counting and numeral-recognition skills via educational interventions, all analyzed and refined through randomized controlled experiments.
One of the world’s most influential researchers on early learning, Spelke recently partnered with Esther Duflo, an MIT economics professor and Nobel laureate, to advise the Delhi office of the nonprofit Abdul Latif Jameel Poverty Action Lab (J-PAL). The group is working with the governments of four separate Indian states to develop and test math curricula for preschoolers, kindergartners, and first-graders.
Alongside her longtime collaborator, the cognitive neuroscientist Stanislas Dehaene, Spelke also serves as an adviser on the French Ministry of Education’s Scientific Council. The nationwide EvalAide language and math assessment was introduced with the council’s help in 2018. The project’s goal, Spelke explained, is establishing a baseline measure of every French child’s grasp of basic numeracy and literacy skills, while supporting the ministry in its commitment to implementing an evidence-based education for all French schoolchildren.
Spelke co-authored the Nature paper with Dehaene and eight other researchers, all based in France. Specifically analyzed were four consecutive cohorts of mostly 5- and 6-year-olds entering school between 2018 and 2021.
As in many countries, French girls tested slightly ahead of French boys on language as they started first grade in the fall. But the gender gap was close to null when it came to math.
“That definitely connects to the earlier issue of whether there’s a biological basis for these differences,” Spelke argued.
French first-graders were then reassessed after four months of school, when a small but significant math gap had emerged favoring boys. The effect quadrupled by the beginning of second grade, when schoolchildren were tested yet again.
“It was even bigger in fourth grade,” said Spelke, noting that French children are now assessed at the start of even-number grades. “And in sixth grade it was bigger still.”
For comparison, EvalAide results show the literacy gender gap was reduced by the first year’s four-month mark and changed far less as students progressed to higher grade levels.
Why would a gender gap widen on math specifically as students accumulated more time in school? According to Spelke, the paper provides “only negative answers” concerning ideas about innate sex differences and social bias.
“If there was really a pervasive social bias, and the parents were susceptible to it,” she said, “we would expect boys to be more oriented toward spatial and numerical tasks when they first got to school.”
Delving further into the data yielded more results that caught the researchers’ interest. For starters, Spelke’s co-authors could disaggregate the findings by month of birth, with the oldest French first-graders turning 7 in January — nearly a year before their youngest classmates. The math gap was found to correlate not with age, but with the number of months spent in school.
Another noteworthy result concerned the COVID-19 pandemic, which wiped out the last 2.5 months of first grade for children who enrolled in fall 2019. “With less time in school, the amount of the gender gap grew by less than it did in the other years where there wasn’t a long school closure,” Spelke said.
The 2019 cohort yielded one more striking result. Earlier that year, French schoolkids had placed at the very bottom of 23 European countries on the quadrennial Trends in International Mathematics and Science Study. That sparked a national conversation: How could France, birthplace of the great René Descartes, be trailing its peers in mathematics?
In May 2019, the French Education Ministry, with the support of its Scientific Council, called for the introduction of more math curriculum during kindergarten. For the first time, an ever-so-slight gender math gap appeared that fall for those entering first grade. It hadn’t been there in 2018 but remained detectable in results from the 2020 and 2021 cohorts.
The overall results, the most conclusive to date, suggest it’s time to shelve explanations based on biology or bias. Instead, it appears there’s something about early math instruction that produces gender disparities.
“We still don’t know what that is exactly,” said Spelke, who plans to spend much of her 2025-26 sabbatical year in France. “But now we have a chance to find out by randomized evaluations of changes to the curriculum.”
Health
Forecasting the next variant
Professor Eugene Shakhnovich (from left), Dianzhuo (John) Wang, and Vaibhav Mohanty worked together on the studies.
Veasey Conway/Harvard Staff Photographer
Yahya Chaudhry
Harvard Correspondent
July 3, 2025
5 min read
Harvard team fuses biophysics and AI to predict viral threats
When the first reports of a new COVID-19 variant emerge, scientists worldwide scramble to answer a critical question: Will this new strain be more contagious or more severe than its predecessors? By the time answers arrive, it’s frequently too late to inform immediate public policy decisions or adjust vaccine strategies, costing public health officials valuable time, effort, and resources.
In a pair of recent publications in Proceedings of the National Academy of Sciences (PNAS), a research team in the Department of Chemistry and Chemical Biology combined biophysics with artificial intelligence to identify high-risk viral variants in record time — offering a transformative approach for handling pandemics. Their goal: to get ahead of a virus by forecasting its evolutionary leaps before it threatens public health.
“As a society, we are often very unprepared for the emergence of new viruses and pandemics, so our lab has been working on ways to be more proactive,” said senior author Eugene Shakhnovich, Roy G. Gordon Professor of Chemistry. “We used fundamental principles of physics and chemistry to develop a multiscale model to predict the course of evolution of a particular variant and to predict which variants will become dominant in populations.”
The studies detail approaches for forecasting the viral variants most likely to become public health risks and for accelerating experimental validation. Together, these advances reshape both the prediction and detection of dangerous viral variants, setting a template for broader applications.
These studies were led by members of Shakhnovich’s lab, including co-authors Dianzhuo (John) Wang and Vaibhav Mohanty, both Ph.D. students in the Harvard Kenneth C. Griffin Graduate School of Arts and Sciences, and Marian Huot, a visiting student from École Normale Supérieure.
“This framework doesn’t just help us track variants — it helps us get ahead of them.”
Marian Huot, visiting student and co-author
“Our work has focused on the spike protein of COVID-19, analyzing how its mutations change viral fitness and immune evasion,” said Wang. “Given that COVID-19 is the most extensively documented pandemic to date, we saw an opportunity to develop models that not only understand viral evolution, but also anticipate which mutations are likely to pose the greatest threat.”
The first study introduced a model that quantitatively linked biophysical features — such as the spike protein’s binding affinity to human receptors and its ability to evade antibodies — to a variant’s likelihood of surging in global populations. By incorporating a complex, yet essential factor called epistasis (where the effect of one mutation hinges on another), the model overcame a key limitation of previous approaches that struggle to make accurate predictions.
“Evolution isn’t linear — mutations interact, sometimes unlocking new pathways for adaptation,” Shakhnovich said. “Factoring these relationships allowed us to forecast the emergence of dominant variants ahead of epidemiological signals.”
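The idea that mutation effects combine non-additively can be made concrete with a small sketch. The following is a toy illustration only, not the team's actual model: the mutation names, effect sizes, and interaction term are all invented to show how a single epistatic correction can change which variant scores highest.

```python
# Toy sketch: variant fitness from additive mutation effects plus a
# pairwise epistasis term. All values are invented for illustration.

# Additive effect of each single mutation on (log) fitness:
# positive = better receptor binding or antibody escape.
additive = {"A": 0.30, "B": -0.10, "C": 0.05}

# Epistatic corrections: the effect of one mutation depends on another.
# Here B is mildly deleterious alone but compensated when A is present.
epistasis = {frozenset({"A", "B"}): 0.25}

def log_fitness(mutations):
    """Log fitness of a variant carrying the given set of mutations."""
    total = sum(additive[m] for m in mutations)
    for pair, effect in epistasis.items():
        if pair <= set(mutations):
            total += effect
    return total

# Without the interaction term, A+B would score 0.30 - 0.10 = 0.20;
# with it, A+B scores 0.45 and overtakes A alone in the ranking.
print(log_fitness({"A"}))       # 0.30
print(log_fitness({"A", "B"}))  # 0.45
```

A purely additive model would rank A alone above A+B; the interaction term reverses that ordering, which is the kind of "unlocked pathway" the quote describes.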
Building on these insights, the companion study introduces VIRAL (Viral Identification via Rapid Active Learning), a computational framework that combines the biophysical model with artificial intelligence to accelerate the detection of high-risk SARS-CoV-2 variants. By analyzing potential spike protein mutations, it identified those likeliest to enhance transmissibility and immune escape.
“At the start of a pandemic, when experimental resources are scarce, we can’t afford to test every possible mutation,” Wang said. “VIRAL uses artificial intelligence to focus lab efforts on the most concerning candidates — dramatically accelerating our ability to identify the variants that could drive the next wave.”
The implications of this research are far-reaching. Simulations show that the VIRAL framework can identify high-risk SARS-CoV-2 variants up to five times faster than conventional approaches, while requiring less than 1 percent of experimental screening effort. This dramatic gain in efficiency could significantly accelerate early outbreak response.
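The active-learning idea behind this efficiency gain can be sketched in miniature. The code below is not VIRAL itself; the fitness function, the candidate grid, the assay budget, and the naive farthest-point acquisition rule are all invented stand-ins. It only shows the loop structure: a cheap heuristic chooses which few candidates to send for expensive testing.

```python
# Toy active-learning loop: rather than assaying every candidate,
# spend a small budget of "lab tests" on the most informative picks.
import random

random.seed(0)

def true_fitness(x):
    # Stand-in for a wet-lab assay (the expensive step in reality).
    return -(x - 0.7) ** 2

candidates = [i / 100 for i in range(100)]
tested = {}   # candidate -> measured fitness
budget = 10   # how many assays we can afford

for _ in range(budget):
    untested = [c for c in candidates if c not in tested]
    if tested:
        # Naive acquisition rule: test the candidate farthest from
        # anything already measured (pure exploration).
        pick = max(untested, key=lambda c: min(abs(c - t) for t in tested))
    else:
        pick = random.choice(untested)
    tested[pick] = true_fitness(pick)

best = max(tested, key=tested.get)
print(f"best candidate after {budget} assays: {best:.2f}")
```

Even this crude rule lands near the true optimum at 0.7 after testing only 10 of the 100 candidates; VIRAL replaces the heuristic with a model-guided acquisition over spike-protein mutations, which is where the reported speedup comes from.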
“This framework doesn’t just help us track variants — it helps us get ahead of them,” said Huot. “By identifying high-fitness variants before they appear in the population, we can inform vaccine design strategies that anticipate, not just react to, emerging threats.”
A defining feature of this work is its interdisciplinary scope, with the international Harvard team bringing together fields of molecular biophysics, artificial intelligence, and virology to deepen our understanding of rapidly evolving viral threats.
“By uniting physics-driven modeling and machine learning, we’re introducing a predictive framework for viral evolution with broad potential,” Shakhnovich said. “We’re eager to see how this strategy might extend beyond infectious diseases into areas like cancer biology.”
Looking ahead, the team aims to adapt and scale the framework for broader use, targeting challenges such as other emerging viruses and rapidly evolving tumor cells. They emphasize that combining physical modeling with AI could shift the paradigm from reactive tracking to proactive biological forecasting.
“In a world where biological threats are constantly evolving, earlier warning and smarter tools are essential,” Wang said. “Our ultimate goal is to create a platform — one that gives scientists and policymakers a head start not just in future pandemics, but in tackling fast-evolving challenges across biology,” Huot added.
Shakhnovich credited grants from the National Institutes of Health for enabling exploratory research to benefit public health. Basic science and future breakthroughs are in grave danger due to Washington’s cuts to scientific research, Shakhnovich warned.
“Our research has the potential to help all of humankind to solve some serious health problems,” Shakhnovich said. “It would not have been possible without federal funding that looks for long-term benefits.”
In 1954, the world’s first successful organ transplant took place at Brigham and Women’s Hospital, in the form of a kidney donated from one twin to the other. At the time, a group of doctors and scientists had correctly theorized that the recipient’s antibodies were unlikely to reject an organ from an identical twin. One Nobel Prize and a few decades later, advancements in immune-suppressing drugs increased the viability of and demand for organ transplants. Today, over 1 million organ transplants have been performed in the United States, more than any other country in the world.
The impressive scale of this achievement was made possible due to advances in organ matching systems: The first computer-based organ matching system was released in 1977. Despite continued innovation in computing, medicine, and matching technology over the years, over 100,000 people in the U.S. are currently on the national transplant waiting list and 13 people die each day waiting for an organ transplant.
Most computational research in organ allocation is focused on the initial stages, when waitlisted patients are being prioritized for organ transplants. In a new paper presented at ACM Conference on Fairness, Accountability, and Transparency (FAccT) in Athens, Greece, researchers from MIT and Massachusetts General Hospital focused on the final, less-studied stage: when an offer is made and the physician at the transplant center decides on behalf of the patient whether to accept or reject the offered organ.
“I don’t think we were terribly surprised, but we were obviously disappointed,” co-first author and recent MIT PhD graduate Hammaad Adam says. Using computational models to analyze transplantation data from over 160,000 transplant candidates in the Scientific Registry of Transplant Recipients (SRTR) between 2010 and 2020, the researchers found that physicians were overall less likely to accept liver and lung offers on behalf of Black candidates, resulting in additional barriers for Black patients in the organ allocation process.
For livers, Black patients had 7 percent lower odds of offer acceptance than white patients. For lungs, the disparity was even larger: 20 percent lower odds of offer acceptance than white patients with similar characteristics.
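Odds ratios are easy to misread as differences in probability, so a quick worked example helps. The 0.93 and 0.80 odds ratios come from the figures above; the 30 percent baseline acceptance probability is an invented number used only to make the arithmetic concrete.

```python
# What "7 percent lower odds" means, with an invented baseline.
# Odds = p / (1 - p); an odds ratio of 0.93 is "7 percent lower odds."

def odds(p):
    return p / (1 - p)

# Suppose comparable white candidates' offers are accepted 30% of the time
# (illustrative baseline, not a figure from the study).
p_white = 0.30
odds_white = odds(p_white)               # 0.30 / 0.70, about 0.43

# 7% lower odds (livers):
odds_liver = odds_white * 0.93
p_black_liver = odds_liver / (1 + odds_liver)   # about 0.285

# 20% lower odds (lungs):
odds_lung = odds_white * 0.80
p_black_lung = odds_lung / (1 + odds_lung)      # about 0.255

print(f"acceptance prob. at odds ratio 0.93: {p_black_liver:.3f}")
print(f"acceptance prob. at odds ratio 0.80: {p_black_lung:.3f}")
```

At this baseline, a 20 percent reduction in odds moves the acceptance probability from 30 percent down to roughly 25.5 percent; the gap in probability terms depends on the baseline rate, which is why studies report odds ratios from models that control for patient characteristics.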
The data don’t necessarily point to clinician bias as the main influence. “The bigger takeaway is that even if there are factors that justify clinical decision-making, there could be clinical conditions that we didn’t control for, that are more common for Black patients,” Adam explains. If the wait-list system fails to account for certain patterns in decision-making, those patterns could create obstacles in the process even if the process itself is “unbiased.”
The researchers also point out that high variability in offer acceptance and risk tolerances among transplant centers is a potential factor complicating the decision-making process. Their FAccT paper references a 2020 paper published in JAMA Cardiology, which concluded that wait-list candidates listed at transplant centers with lower offer acceptance rates have a higher likelihood of mortality.
Another key finding was that an offer was more likely to be accepted if the donor and candidate were of the same race. The paper describes this trend as “concerning,” given the historical inequities in organ procurement that have limited donation from racial and ethnic minority groups.
Previous work from Adam and his collaborators has aimed to address this gap. Last year, they compiled and released Organ Retrieval and Collection of Health Information for Donation (ORCHID), the first multi-center dataset describing the performance of organ procurement organizations (OPOs). ORCHID contains 10 years’ worth of OPO data, and is intended to facilitate research that addresses bias in organ procurement.
“Being able to do good work in this field takes time,” says Adam, who notes that the entirety of the organ allocation project took years to complete. To his knowledge, only one paper to date studies the association between offer acceptance and race.
While the bureaucratic and highly interdisciplinary nature of clinical AI projects can dissuade computer science graduate students from pursuing them, Adam committed to the project for the duration of his PhD in the lab of associate professor of electrical engineering Marzyeh Ghassemi, an affiliate of the MIT Jameel Clinic and the Institute of Medical Engineering and Sciences.
To graduate students interested in pursuing clinical AI research projects, Adam recommends that they “free [themselves] from the cycle of publishing every four months.”
“I found it freeing, to be honest — it’s OK if these collaborations take a while,” he says. “It’s hard to avoid that. I made the conscious choice a few years ago and I was happy doing that work.”
This work was supported with funding from the MIT Jameel Clinic. This research was supported, in part, by Takeda Development Center Americas Inc. (successor in interest to Millennium Pharmaceuticals Inc.), an NIH Ruth L. Kirschstein National Research Service Award, a CIFAR AI Chair at the Vector Institute, and by the National Institutes of Health.
The first successful organ transplant was less than 75 years ago. Despite significant progress since then, many patients still fall through the gaps of what remains a complicated procedure.
High temperatures and more frequent heatwaves are causing many people to doubt whether high-density urban planning is still sustainable. However, building physicist Jan Carmeliet argues that even dense cities can be cool if they are planned correctly.
Science & Tech
Highly sensitive science
Veasey Conway/Harvard Staff Photographer
Sy Boles
Harvard Staff Writer
July 2, 2025
6 min read
David Ginty probes pleasure and pain to shed light on autism, other conditions
Part of the Profiles of Progress series
The itch of a clothing tag. The seam on the inside of a sock. The tickling of hairs on the back of your neck. For many of us, it’s easy to tune out these sensations as we move through the day. But for some autistic people, everyday sensations can be intolerable.
David Ginty knows why, and it’s not, as many autism researchers once believed, a dysfunction of the brain.
Ginty, the Edward R. and Anne G. Lefler Professor of Neurobiology and chair of the Department of Neurobiology at Harvard Medical School, studies touch and pain. Scientists have known for some time, he said, that our experience of physical sensation is a collaboration between our brain, our central nervous system, and sensory neurons. But the mechanisms behind that collaboration have remained a mystery, and the quest for an answer has major implications for our ability to treat everything from chronic pain to autistic hypersensitivity to sexual dysfunction.
“The auditory system cares about sound waves in a particular frequency range,” Ginty said. “The visual system, similarly, only cares about a narrow band of the visual light range. But the somatosensory system cares about tactile stimuli, thermal stimuli, chemical stimuli, proprioception — where your body and limbs are in space and time, as well as the state of many of our body organs.
“And then there’s an affective component, an emotional component of touch, which in itself is a huge burgeoning area that is interesting. How does touch trigger an emotional response? Somatosensation is incredibly rich and multidimensional.”
“We’re looking hard to find non-opioid approaches to treat pain and we’ve identified many potential approaches.”
David Ginty
About 10 years ago, Ginty and his team found that in animal models of autism spectrum disorder, the locus of sensory dysfunction was not the brain, as had been thought, but the spinal cord and periphery. The key players are second-order neurons in the spinal cord that function like a mixing board’s gain or volume control, amplifying or dampening sensations as they travel from the skin and other sensory organs to the brain. In some ASD models, these second-order neurons appeared to be stuck on high, leading to sensory overload.
“It made us realize that we could potentially treat sensory over-reactivity by turning down the activity of sensory neurons, or sensory neuron responsiveness, in the peripheral nervous system,” he said.
The most logical approach would be to use drugs that turn down sensory neuron activity. Lauren Orefice, then a postdoc in the Ginty lab, thought that benzodiazepines could be used to silence nerve cells in the periphery to help reduce sensory over-reactivity. But pediatricians are reluctant to prescribe potentially addictive sedatives to their patients.
“So one approach that we’ve been trying to take is to develop peripherally restricted benzodiazepines that can reduce the activity of neurons in the peripheral nervous system without penetrating the brain, and therefore without sedating side effects,” Ginty said.
For children with autistic hypersensitivity, such a drug could be life-changing. It could reduce overstimulation, lower anxiety, prevent meltdowns, and let them experience a hug as a pleasure rather than a source of pain.
Pacinian corpuscles — neurons that sense vibrations — are delicate enough to pick up someone’s footsteps on the other side of the room.
Image by Zoe Sarafis
The implications of Ginty’s work on the systems underlying pleasure and pain extend far beyond autism research. The somatosensory system is made of some 20 types of neurons tucked into every imaginable part of the body: the base of our hair follicles, the crevasses of our dermis, in our muscles and joints — anywhere that detects variations of stretch, pressure, vibration, temperature, and even our position in space. If he had to pick a favorite neuron, he’d pick two: Pacinian corpuscles and nociceptors.
Pacinian corpuscles sense vibrations. They’re delicate enough to pick up someone’s footsteps on the other side of the room, and impactful enough to make us cry when music moves through our body. Nociceptors pick up on noxious stimuli — or, in plain English, pain.
“We’re figuring out how nociceptors are connected in the central nervous system to give rise to reflexes, like quickly removing your hand from a hot stove, or to the emotional component of pain,” Ginty said. “These are truly amazing neurons. They have very high thresholds, unlike the Pacinian corpuscles, which respond just to tiny tweaks or vibrations of the skin. The nociceptors only fire an electrical impulse when you have a damaging encounter.”
New genetic tools allow Ginty to understand how nociceptors connect to the central nervous system and identify every protein that nociceptors express, unlocking a new range of potential drug targets. “Right now, opioids are the best remedy we have for many types of pain, and that’s really gotten us into trouble,” he said. “We’re looking hard to find non-opioid approaches to treat pain and we’ve identified many potential approaches by targeting the nociceptors themselves.”
Ginty’s lab is not set up for drug development. But the research done in his lab forms the groundwork that the pharmaceutical industry needs to create treatments that improve lives. Ginty’s research is often exploratory, he said. It’s not always clear whether or how a certain experiment will translate into a therapy or marketable drug, which is why industry funding is rarely sufficient. It’s federal grants that have supported the fundamental science, which, in the long run, lead to cures.
Ginty has had two grants frozen in the Trump administration’s dispute with Harvard. The first, a partnership with Clifford Woolf at Boston Children’s Hospital, was exploring how pain stimuli in the skin, joints, and bone are propagated into the spinal cord and conveyed to the brain, and where in the brain those signals go.
The second was a prestigious R35 grant, sometimes called the Outstanding Investigator Award, which provides flexible, long-term funding to established investigators to allow them to pursue particularly innovative research. It was meant to cover the bulk of Ginty’s work for eight years, but it was eliminated just one year in.
The most devastating part of the cancellations, he said, is that they come at a time of unparalleled progress in neurobiology.
“The advances are just breathtaking because of the alchemy of bringing together genetics and physiology and molecular biology, the knowledge that is being unveiled. At no other time in history have the advances been so rapid and so large as the time we’re in now. I feel fortunate to be in this position and to play a part in discovering how the nervous system works and new therapeutic opportunities. We need to find ways to survive the current funding crisis so that progress that leads to new treatments for disorders of the nervous system can continue.”
AP photos
Health
Was legal pot a good idea?
Researchers detail what we know about impact on revenue and health — and what we still need to find out
Saima Sidik
Harvard Correspondent
July 2, 2025
long read
In Massachusetts, getting stoned gets easier all the time.
Since the Commonwealth legalized recreational cannabis in 2016, dispensaries have proliferated, the price of cannabis has dropped by more than half, and the potency of pot has shot up. All told, cannabis has become big business in Massachusetts, with the industry raking in more than $1.64 billion last year. Other states have seen similar trends.
Some supporters of legalization envisioned a new era of personal freedom, with easy access to a plant they touted as a healthier alternative to alcohol. Tax revenue from cannabis sales would fund valuable state projects, and legalization would alleviate a burden on the justice system, they said.
Almost a decade later, we asked four researchers to weigh in on how those hopes line up against the reality of marijuana legalization. Interviews have been edited for clarity and length.
Harvard file photo
Kevin P. Hill
Associate Professor of Psychiatry at Harvard Medical School; Director of Addiction Psychiatry at Beth Israel Deaconess Medical Center
Legalizing marijuana has created a huge new revenue stream for the state — over $920 million according to the Marijuana Policy Project. And I think that’s great. But in my eyes, that revenue has come at a great cost to public health.
My colleagues and I are treating more and more people who have developed cannabis use disorder, which is when cannabis use interferes with key spheres in one’s life such as work, school, or relationships. That’s not surprising when you consider that the number of daily or near-daily cannabis users has increased 20-fold over the last three decades.
On top of that, cannabis is far more potent today than it was in decades past. This trend started before legalization, but when big businesses got involved in cultivation, they had the means to really drive up the THC content — that’s the component of the plant that makes a user feel high. In the 1960s, ’70s, and ’80s, the average THC content of cannabis was around 3 percent or 4 percent. Now you can find cannabis flower that’s 20 percent to 30 percent or even higher.
That jump in potency has led to a significant increase in the number of adult users who develop cannabis use disorder, from 10 percent just 10 years ago to around 30 percent today. It’s hard to say how much of this trend can be attributed to legalization, but I think legalization has probably pushed it along that much faster.
The problem, I think, comes down to the difference between ideas and implementation. We need to couple increasing cannabis use to additional research so that we can mitigate the harms done by the drug while still maintaining the positive aspects of legalization. For example, we need more research on how law enforcement can prevent people from driving under the influence of cannabis, which can lead to dangerous situations. Likewise, we need people to research the medical benefits of cannabis (and yes, in certain circumstances the drug does seem to have bona fide medical benefits). Ideally, we’ll find ways for people to take advantage of those benefits while keeping their risk of developing a substance use disorder — or other problems that might result from cannabis use — low.
I would love to see the states and private companies that are benefiting from cannabis sales put more money toward research so that cannabis science can keep pace with interest in cannabis.
Peter Grinspoon
Instructor in Medicine at Harvard Medical School
Author of ‘Seeing Through the Smoke: A Cannabis Specialist Untangles the Truth about Marijuana’
I think cannabis legalization has been a tremendous success overall. Certainly there are things that could still be better, but we’ve made great progress in a few key areas.
First, far fewer people are arrested for cannabis these days than in decades past. Arrests for possession dropped over 70 percent between 2010 and 2018. And that’s great because having an arrest on your record can impact your education, your housing, your employment, everything. It’s awful. Things aren’t perfect; arrests haven’t gone down to zero, and Black people are still arrested at higher rates than white people. But the drug isn’t clogging up the justice system as much as it used to.
Second, by creating a legal market we’ve made sure people have access to safe cannabis as opposed to an illicit product that may be contaminated with pesticides or mold or heavy metals. Of course, not everybody buys through the legal market, and that’s something we could still work on.
And lastly, we’re generating tax revenue for the state, which is a huge win. At this point, the state is actually generating more tax revenue from cannabis than it is from alcohol.
Are there still problems to be solved? Yes, absolutely! One of the biggest problems is that accidental overconsumption is becoming more common. That’s for two reasons. The first is that products are so much stronger than they used to be. People take the same three bong hits they took back in college not realizing that today that’s the equivalent of taking about seven times what they used to.
And second, we’re stupid enough to make cannabis into gummies and chocolates. Kids will eat these if they’re left out in the open, and that can send them to the ER. But it’s not just kids — adults are also prone to overconsumption when cannabis is made to taste good. I’m firmly opposed to turning cannabis into candy — or pizza sauce or hot sauce or any other type of food — and I’ve been blowing this horn for a long time.
Just like criminalizing cannabis, legalizing it is a social experiment. We need to monitor the situation carefully because there could be risks that we haven’t even thought of. But as far as I know, nobody credible is arguing for a return to cannabis prohibition, and I think that’s a testament to the overall success we’ve had in legalizing it.
Michael Flaherty
Assistant Professor of Pediatrics, Harvard Medical School
Pediatric Critical Care Physician, Massachusetts General Hospital
As the director of MGH’s pediatric injury prevention program, my interest in cannabis legalization lies in how it impacts the safety of children. And accidental cannabis exposure can definitely be a threat to child safety.
When my colleagues and I used public health data to study the frequency with which cannabis sends kids to the emergency room, we found that these visits increased by about 60 percent after recreational dispensaries opened. Most of the exposures (over 80 percent) have been in teenagers, but the biggest increases were in younger kids. In the zero-to-5 age group, we saw about a fourfold increase, and in the 6-to-12 age group, a sevenfold increase.
Because they’re small, kids will be more severely affected by cannabis than an adult who takes the same amount. In fact, cannabis can make kids so sleepy they start to have trouble breathing. To complicate matters, the same thing can happen if kids ingest a number of prescription drugs or if they get meningitis or encephalitis. Cannabis consumption is rarely if ever fatal, but those other conditions can definitely be fatal. Unless someone saw the kid eat cannabis, we often don’t know what’s going on — or how bad the situation is — until we test for everything plus the kitchen sink. It puts a lot of strain on the system.
Cannabis has also had some positive effects for pediatric medicine. In particular, a component of cannabis called cannabidiol has been quite successful for treating epilepsy in children who don’t respond to other seizure medications. I don’t really have an opinion on whether the pros of cannabis legalization outweigh the cons. But as a pediatrician, it’s my job to advocate for children and to protect them from the unintended consequences of voters’ actions, and right now that means educating people about the dangers of accidental consumption.
Toddlers in their exploratory phase are especially likely to eat cannabis if it’s left lying around. It’s really important that parents keep cannabis well secured so that kids can’t access it, and we also need to put a call out to manufacturers and retailers: Make the packaging child-proof! These products should be a little more difficult for a child as young as 3 or 4 to open.
Carmel Shachar
Assistant Clinical Professor at Harvard Law School
Faculty Director of the Health Law and Policy Clinic at Harvard Law School
One of the strangest aspects of cannabis legalization is that the drug is legal only under state law. At the federal level, cannabis is still criminalized. And part of why it’s criminalized is that cannabis is classified as a Schedule I substance, which means it is considered to have a high potential for abuse and no accepted medical use.
At this point it seems clear that cannabis does have medical value, for example for relieving certain types of pain and to reduce nausea during chemotherapy. The Schedule I classification was a response to the cultural perception people had of cannabis during the 1960s, and the designation was made without much scientific evidence to back it up. Unfortunately, with cannabis illegal on a federal level, it’s difficult for scientists to research the plant’s legitimate medical uses because they can’t use federal funds. And because cannabis is a natural product and can’t be patented, private industry isn’t very interested in researching it.
Many people — myself included — hope that cannabis will be reclassified as a Schedule III drug, which would clear some of these roadblocks to research. Legalization at the state level sets a precedent for reclassifying cannabis because it shows that even with millions of people now having access, the sky has not fallen. But at this point, we still haven’t achieved the reclassification that we’re hoping for.
During the Biden administration, there was interest in rescheduling cannabis, but the process has a lot of twists and turns, and it wasn’t completed before the new administration took office. Now rescheduling appears to be on pause. The motion is parked in front of a Drug Enforcement Administration administrative law judge who seems to be skeptical of its value.
When the falcons come home to roost
A nest cam has been installed to livestream a pair of peregrine falcons atop the Memorial Hall tower.
Photos by Stephanie Mitchell/Harvard Staff Photographer
Eileen O’Grady
Harvard Staff Writer
July 2, 2025
5 min read
Peregrines have rebounded since DDT era and returned to Memorial Hall. Now new livestream camera offers online visitors front row seat of storied perch.
A new wildlife camera mounted on Memorial Hall is giving online visitors an up-close glimpse of a peregrine falcon nesting site with a storied history.
The FAS installed the Peregrine Falcon Cam this spring on the east side of the tower, facing the rooftop nest box. There have been frequent sightings of two falcons, one male and one female, who appear throughout the day to eat, preen, and rest when they aren’t hunting.
“Buildings are natural canyons for them,” said Brian Farrell, Monique and Philip Lehner Professor for the Study of Latin America, professor of biology, and curator of entomology in the Museum of Comparative Zoology. “They’re like cliffsides, and they have loads of starlings and pigeons around, so plenty of food. They like the high perches because they hunt only birds in flight, and only over open spaces.”
The Memorial Hall site has a long history — late pioneering biologist and Harvard professor emeritus Edward O. Wilson observed peregrine falcons nesting there as a Ph.D. student in 1955. But the U.S. peregrine population was decimated by the pesticide DDT in the mid-20th century, and none of the birds of prey were seen on Harvard’s campus for years.
Ray Traietti, director of administration in the Office for the Arts and former building manager of Memorial Hall, realized the birds had returned one day in 2014. He was walking into work when a severed starling head dropped at his feet.
“I started noticing pieces of dead birds all around. I was like, ‘Oh, this is kind of odd,’” Traietti recalled. Officials from the Massachusetts Division of Fisheries and Wildlife would later discover that the falcon eggs laid on the rubber roof of Memorial Hall weren’t viable that year, likely due to exposure to the elements.
Harvard biology Professor Brian Farrell (left) and Ray Traietti, the former building manager of Memorial Hall.
State officials installed the box the next year to protect future nests. The three-sided design allows the fastest birds on Earth to leave the nest with their signature move: a dive that can reach 200 mph.
“They need open space to launch themselves,” Farrell said. “They don’t flap and go up vertically like birds with broader wings can do. They just take off like fighter jets off an aircraft carrier.”
In the spring of 2021 a pair of falcons successfully hatched and fledged three chicks — the first known to hatch on Memorial Hall since the 1950s.
“I think, to E.O. Wilson’s point, there’s something about that location that works for them,” Traietti said. “To think that after the nationwide decimation of DDT, that they went back to that same spot, is pretty remarkable.”
“The world seems more chaotic every day, but here’s something that’s beautiful and pure and continuing on.”
Brian Farrell
After the federal DDT ban in the 1970s, a reintroduction effort followed, and the falcon population has slowly increased. Previously designated endangered in Massachusetts, peregrines were moved to the less critical “special concern” category in 2019. As of 2020, there were at least 46 nesting pairs in the state.
“These animals are living in a pretty human ecosystem, and they’re thriving,” Traietti said. “When you see them up there, it’s a testament to coexistence.”
The Memorial Hall falcons have been a rotating cast, but one familiar face keeps returning. Fellsway (banded with the number 79/CB) has nested at Harvard for the past three years. He was found injured in Medford and rehabilitated at the Tufts Wildlife Clinic in 2021.
He raised three chicks on Memorial Hall in spring 2023 with an unbanded female (Traietti calls her “Athena,” after the stained-glass window in Sanders Theatre). Fellsway and Athena returned in 2024 and raised four more chicks.
But in a surprising turn this year, Fellsway returned with a new mate: Letitia, identified by leg band 28/BV, who was previously seen nesting at Boston University. According to Farrell, she and Fellsway haven’t laid eggs at Memorial Hall — likely because their bond is new, though also possibly because Letitia hatched a brood with another male at BU just last month.
“It’s an interesting and complex drama of pairings and places,” Farrell said. “It’s a little bit confusing trying to keep track of these guys and figure out who’s who, because they’re almost indistinguishable as adults. You have to get a good enough photograph that you can read the band.”
This fall, Traietti says, a new nest box will be installed. He and Farrell are hopeful that there will be a nest next spring.
In the meantime, Farrell said he is glad the Falcon Cam can help the Harvard community feel connected to the fierce, powerful birds living just overhead.
“It’s really about outreach and sharing science with the world,” Farrell said. “The world seems more chaotic every day, but here’s something that’s beautiful and pure and continuing on.”
The explosive growth of AI-powered computing centers is creating an unprecedented surge in electricity demand that threatens to overwhelm power grids and derail climate goals. At the same time, artificial intelligence technologies could revolutionize energy systems, accelerating the transition to clean power.
“We’re at a cusp of potentially gigantic change throughout the economy,” said William H. Green, director of the MIT Energy Initiative (MITEI) and Hoyt C. Hottel Professor in the MIT Department of Chemical Engineering, at MITEI’s Spring Symposium, “AI and energy: Peril and promise,” held on May 13. The event brought together experts from industry, academia, and government to explore solutions to what Green described as both “local problems with electric supply and meeting our clean energy targets” while seeking to “reap the benefits of AI without some of the harms.” The challenge of data center energy demand and potential benefits of AI to the energy transition is a research priority for MITEI.
AI’s startling energy demands
From the start, the symposium highlighted sobering statistics about AI’s appetite for electricity. After decades of flat electricity demand in the United States, computing centers now consume approximately 4 percent of the nation’s electricity. Although there is great uncertainty, some projections suggest this demand could rise to 12 to 15 percent by 2030, largely driven by artificial intelligence applications.
Vijay Gadepally, senior scientist at MIT’s Lincoln Laboratory, emphasized the scale of AI’s consumption. “The power required for sustaining some of these large models is doubling almost every three months,” he noted. “A single ChatGPT conversation uses as much electricity as charging your phone, and generating an image consumes about a bottle of water for cooling.”
Facilities requiring 50 to 100 megawatts of power are emerging rapidly across the United States and globally, driven by both casual use and institutional research relying on large language models such as ChatGPT and Gemini. Gadepally cited congressional testimony by Sam Altman, CEO of OpenAI, highlighting how fundamental this relationship has become: “The cost of intelligence, the cost of AI, will converge to the cost of energy.”
“The energy demands of AI are a significant challenge, but we also have an opportunity to harness these vast computational capabilities to contribute to climate change solutions,” said Evelyn Wang, MIT vice president for energy and climate and the former director at the Advanced Research Projects Agency-Energy (ARPA-E) at the U.S. Department of Energy.
Wang also noted that innovations developed for AI and data centers — such as efficiency, cooling technologies, and clean-power solutions — could have broad applications beyond computing facilities themselves.
Strategies for clean energy solutions
The symposium explored multiple pathways to address the AI-energy challenge. Some panelists presented models suggesting that while artificial intelligence may increase emissions in the short term, its optimization capabilities could enable substantial emissions reductions after 2030 through more efficient power systems and accelerated clean technology development.
Research shows regional variations in the cost of powering computing centers with clean electricity, according to Emre Gençer, co-founder and CEO of Sesame Sustainability and former MITEI principal research scientist. Gençer’s analysis revealed that the central United States offers considerably lower costs due to complementary solar and wind resources. However, achieving zero-emission power would require massive battery deployments — five to 10 times more than moderate carbon scenarios — driving costs two to three times higher.
“If we want to do zero emissions with reliable power, we need technologies other than renewables and batteries, which will be too expensive,” Gençer said. He pointed to “long-duration storage technologies, small modular reactors, geothermal, or hybrid approaches” as necessary complements.
Because of data center energy demand, there is renewed interest in nuclear power, noted Kathryn Biegel, manager of R&D and corporate strategy at Constellation Energy, adding that her company is restarting the reactor at the former Three Mile Island site, now called the “Crane Clean Energy Center,” to meet this demand. “The data center space has become a major, major priority for Constellation,” she said, emphasizing how their needs for both reliability and carbon-free electricity are reshaping the power industry.
Can AI accelerate the energy transition?
Artificial intelligence could dramatically improve power systems, according to Priya Donti, assistant professor and the Silverman Family Career Development Professor in MIT's Department of Electrical Engineering and Computer Science and the Laboratory for Information and Decision Systems. She showcased how AI can accelerate power grid optimization by embedding physics-based constraints into neural networks, potentially solving complex power flow problems at “10 times, or even greater, speed compared to your traditional models.”
AI is already reducing carbon emissions, according to examples shared by Antonia Gawel, global director of sustainability and partnerships at Google. Google Maps’ fuel-efficient routing feature has “helped to prevent more than 2.9 million metric tons of GHG [greenhouse gas] emissions since launch, which is the equivalent of taking 650,000 fuel-based cars off the road for a year,” she said. Another Google research project uses artificial intelligence to help pilots avoid creating contrails, which represent about 1 percent of global warming impact.
AI’s potential to speed materials discovery for power applications was highlighted by Rafael Gómez-Bombarelli, the Paul M. Cook Career Development Associate Professor in the MIT Department of Materials Science and Engineering. “AI-supervised models can be trained to go from structure to property,” he noted, enabling the development of materials crucial for both computing and efficiency.
Securing growth with sustainability
Throughout the symposium, participants grappled with balancing rapid AI deployment against environmental impacts. While AI training receives most attention, Dustin Demetriou, senior technical staff member in sustainability and data center innovation at IBM, quoted a World Economic Forum article that suggested that “80 percent of the environmental footprint is estimated to be due to inferencing.” Demetriou emphasized the need for efficiency across all artificial intelligence applications.
Jevons’ paradox, where “efficiency gains tend to increase overall resource consumption rather than decrease it,” is another factor to consider, cautioned Emma Strubell, the Raj Reddy Assistant Professor in the Language Technologies Institute in the School of Computer Science at Carnegie Mellon University. Strubell advocated for viewing computing center electricity as a limited resource requiring thoughtful allocation across different applications.
Several presenters discussed novel approaches for integrating renewable sources with existing grid infrastructure, including potential hybrid solutions that combine clean installations with existing natural gas plants that have valuable grid connections already in place. These approaches could provide substantial clean capacity across the United States at reasonable costs while minimizing reliability impacts.
Navigating the AI-energy paradox
The symposium highlighted MIT’s central role in developing solutions to the AI-electricity challenge.
Green spoke of a new MITEI program on computing centers, power, and computation that will operate alongside the comprehensive spread of MIT Climate Project research. “We’re going to try to tackle a very complicated problem all the way from the power sources through the actual algorithms that deliver value to the customers — in a way that’s going to be acceptable to all the stakeholders and really meet all the needs,” Green said.
Participants in the symposium were polled about priorities for MIT’s research by Randall Field, MITEI director of research. The real-time results ranked “data center and grid integration issues” as the top priority, followed by “AI for accelerated discovery of advanced materials for energy.”
In addition, attendees revealed that most view AI's potential regarding power as a “promise,” rather than a “peril,” although a considerable portion remain uncertain about the ultimate impact. When asked about priorities in power supply for computing facilities, half of the respondents selected carbon intensity as their top concern, with reliability and cost following.
At the 2025 MIT Energy Initiative Spring Symposium, Evelyn Wang (at lectern), the MIT vice president for energy and climate, joined MITEI Director William H. Green to discuss how collaborations across campus can help solve the data center challenge.
MIT Proto Ventures is the Institute’s in-house venture studio — a program designed not to support existing startups, but to create entirely new ones from the ground up. Operating at the intersection of breakthrough research and urgent real-world problems, Proto Ventures proactively builds startups that leverage MIT technologies, talent, and ideas to address high-impact industry challenges.
Each venture-building effort begins with a “channel” — a defined domain such as clean energy, fusion, or AI in health care — where MIT is uniquely positioned to lead, and where there are pressing real-world problems needing solutions. Proto Ventures hires full-time venture builders, deeply technical entrepreneurs who embed in MIT labs, connect with faculty, scout promising inventions, and explore unmet market needs. These venture builders work alongside researchers and aspiring founders from across MIT who are accepted into Proto Ventures’ fellowship program to form new teams, shape business concepts, and drive early-stage validation. Once a venture is ready to spin out, Proto Ventures connects it with MIT’s broader innovation ecosystem, including incubation programs, accelerators, and technology licensing.
David Cohen-Tanugi SM '12, PhD '15, has been the venture builder for the fusion and clean energy channel since 2023.
Q: What are the challenges of launching startups out of MIT labs? In other words, why does MIT need a venture studio?
A: MIT regularly takes on the world’s “holy grail” challenges, such as decarbonizing heavy industry, preventing future pandemics, or adapting to climate extremes. Yet despite its extraordinary depth in research, too few of MIT’s technical breakthroughs evolve into successful startups targeting these highest-impact problems.
There are a few reasons for this. Right now, it takes a great deal of serendipity for a technology or idea in the lab to evolve into a startup project within the Institute’s ecosystem. Great startups don’t just emerge from great technology alone — they emerge from combinations of great technology, unmet market needs, and committed people.
A second reason is that many MIT researchers don’t have the time, professional incentives, or skill set to commercialize a technology. They often lack someone that they can partner with, someone who is technical enough to understand the technology but who also has experience bringing technologies to market.
Finally, while MIT excels at supporting entrepreneurial teams that are already in motion — thanks to world-class accelerators, mentorship services, and research funding programs — what’s missing is actually further upstream: a way to deliberately uncover and develop venture opportunities that haven’t even taken shape yet.
MIT needs a venture studio because we need a new, proactive model for research translation — one that breaks down silos and that bridges deep technical talent with validated market needs.
Q: How do you add value for MIT researchers?
A: As a venture builder, I act as a translational partner for researchers — someone who can take the lead on exploring commercial pathways in partnership with the lab. Proto Ventures fills the gap for faculty and researchers who believe their work could have real-world applications but don’t have the time, entrepreneurial expertise, or interested graduate students to pursue them.
Having done my PhD studies at MIT a decade ago, I’ve seen firsthand how many researchers are interested in impact beyond academia but don’t know where to start. I help them think strategically about how their work fits into the real market, I break down tactical blockers such as intellectual property conversations or finding a first commercial partner, and I roll up my sleeves to do customer discovery, identify potential co-founders, or locate new funding opportunities. Even when the outcome isn’t a startup, the process often reveals new collaborators, use cases, or research directions. We’re not just scouting for IP — we’re building a deeper culture of tech translation at MIT, one lab at a time.
Q: What counts as a success?
A: We’ve launched five startups across two channels so far, including one that will provide energy-efficient propulsion systems for satellites and another that is developing advanced power supply units for data centers.
But counting startups is not the only way to measure impact. While embedded at the MIT Plasma Science and Fusion Center, I have engaged with 75 researchers in translational activities — many for the first time. For example, I’ve helped research scientist Dongkeun Park craft funding proposals for next-generation MRI and aircraft engines enabled by high-temperature superconducting magnets. Working with Mike Nour from the MIT Sloan Executive MBA program, we’ve also developed an innovative licensing strategy for Professor Michael P. Short and his antifouling coating technology. Sometimes it takes an outsider like me to connect researchers across departments, suggest a new collaboration, or unearth an overlooked idea. Perhaps most importantly, we’ve validated that this model works: embedding entrepreneurial scientists in labs changes how research is translated.
We’ve also seen that researchers are eager to translate their work — they just need a structure and a partner to help them do it. That’s especially true in the hard tech in which MIT excels. That’s what Proto Ventures offers. And based on our early results, we believe this model could be transformative not just for MIT, but for research institutions everywhere.
Incoming information from the retina is channeled into two pathways in the brain’s visual system: one that’s responsible for processing color and fine spatial detail, and another that’s involved in spatial localization and detecting high temporal frequencies. A new study from MIT provides an account for how these two pathways may be shaped by developmental factors.
Newborns typically have poor visual acuity and poor color vision because their retinal cone cells are not well-developed at birth. This means that early in life, they are seeing blurry, color-reduced imagery. The MIT team proposes that such blurry, color-limited vision may result in some brain cells specializing in low spatial frequencies and low color tuning, corresponding to the so-called magnocellular system. Later, with improved vision, cells may tune to finer details and richer color, consistent with the other pathway, known as the parvocellular system.
To test their hypothesis, the researchers trained computational models of vision on a trajectory of input similar to what human babies receive early in life — low-quality images early on, followed by full-color, sharper images later. They found that these models developed processing units with receptive fields exhibiting some similarity to the division of magnocellular and parvocellular pathways in the human visual system. Vision models trained on only high-quality images did not develop such distinct characteristics.
“The findings potentially suggest a mechanistic account of the emergence of the parvo/magno distinction, which is one of the key organizing principles of the visual pathway in the mammalian brain,” says Pawan Sinha, an MIT professor of brain and cognitive sciences and the senior author of the study.
MIT postdocs Marin Vogelsang and Lukas Vogelsang are the lead authors of the study, which appears today in the journal Communications Biology. Sidney Diamond, an MIT research affiliate, and Gordon Pipa, a professor of neuroinformatics at the University of Osnabrueck, are also authors of the paper.
Sensory input
The idea that low-quality visual input might be beneficial for development grew out of studies of children who were born blind but later had their sight restored. An effort from Sinha’s laboratory, Project Prakash, has screened and treated thousands of children in India, where reversible forms of vision loss such as cataracts are relatively common. After their sight is restored, many of these children volunteer to participate in studies in which Sinha and his colleagues track their visual development.
In one of these studies, the researchers found that children who had cataracts removed exhibited a marked drop in object-recognition performance when the children were presented with black and white images, compared to colored ones. Those findings led the researchers to hypothesize that reduced color input characteristic of early typical development, far from being a hindrance, allows the brain to learn to recognize objects even in images that have impoverished or shifted colors.
“Denying access to rich color at the outset seems to be a powerful strategy to build in resilience to color changes and make the system more robust against color loss in images,” Sinha says.
In that study, the researchers also found that when computational models of vision were initially trained on grayscale images, followed by color images, their ability to recognize objects was more robust than that of models trained only on color images. Similarly, another study from the lab found that models performed better when they were trained first on blurry images, followed by sharper images.
To build on those findings, the MIT team wanted to explore what might be the consequences of both of those features — color and visual acuity — being limited at the outset of development. They hypothesized that these limitations might contribute to the development of the magnocellular and parvocellular pathways.
In addition to being highly attuned to color, cells in the parvocellular pathway have small receptive fields, meaning that they receive input from more compact clusters of retinal ganglion cells. This helps them to process fine detail. Cells in the magnocellular pathway pool information across larger areas, allowing them to process more global spatial information.
To test their hypothesis that developmental progressions could contribute to the magno and parvo cell selectivities, the researchers trained models on two different sets of images. One model was presented with a standard dataset of images that are used to train models to categorize objects. The other dataset was designed to roughly mimic the input that the human visual system receives from birth. This “biomimetic” data consists of low-resolution, grayscale images in the first half of the training, followed by high-resolution, colorful images in the second half.
After the models were trained, the researchers analyzed the models’ processing units — nodes within the network that bear some resemblance to the clusters of cells that process visual information in the brain. They found that the models trained on the biomimetic data developed a distinct subset of units that are jointly responsive to low-color and low-spatial-frequency inputs, similar to the magnocellular pathway. Additionally, these biomimetic models exhibited groups of more heterogeneous parvocellular-like units tuned predominantly to higher spatial frequencies or richer color signals. Such a distinction did not emerge in the models trained on full-color, high-resolution images from the start.
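The paper’s actual models and datasets are not reproduced here, but the two-phase “biomimetic” regime described above can be sketched as a simple image-degradation curriculum: grayscale, blurred input for the first half of training, unaltered input afterwards. This is a hypothetical illustration; the function name, kernel size, and 50 percent switchover are assumptions, not the authors’ code.

```python
import numpy as np

def biomimetic_transform(image, progress, blur_kernel=9):
    """Degrade an RGB image to mimic early visual input.

    For the first half of training (progress < 0.5), the image is
    converted to grayscale and box-blurred to strip color and high
    spatial frequencies; afterwards it passes through unchanged.
    `progress` is the fraction of training completed, in [0, 1].
    """
    if progress >= 0.5:
        return image
    # Grayscale via standard luminance weights, replicated to 3 channels
    gray = image @ np.array([0.299, 0.587, 0.114])
    gray = np.repeat(gray[..., None], 3, axis=-1)
    # Box blur: average over a k x k neighborhood (edge-padded)
    k = blur_kernel
    pad = k // 2
    padded = np.pad(gray, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    blurred = np.zeros_like(gray)
    for dy in range(k):
        for dx in range(k):
            blurred += padded[dy:dy + gray.shape[0], dx:dx + gray.shape[1]]
    return blurred / (k * k)
```

In a real training loop this transform would wrap the data loader, with `progress` computed from the current epoch, so the same architecture sees low-quality images first and full-quality images later.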
“This provides some support for the idea that the ‘correlation’ we see in the biological system could be a consequence of the types of inputs that are available at the same time in normal development,” Lukas Vogelsang says.
Object recognition
The researchers also performed additional tests to reveal what strategies the differently trained models were using for object recognition tasks. In one, they asked the models to categorize images of objects where the shape and texture did not match — for example, an animal with the shape of a cat but the texture of an elephant.
This is a technique several researchers in the field have employed to determine which image attributes a model is using to categorize objects: the overall shape or the fine-grained textures. The MIT team found that models trained on biomimetic input were markedly more likely to use an object’s shape to make those decisions, just as humans usually do. Moreover, when the researchers systematically removed the magnocellular-like units from the models, the models quickly lost their tendency to use shape to make categorizations.
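The cue-conflict analysis described above reduces to a simple statistic: among trials where the model answers with either the shape category or the texture category, what fraction follow the shape? A minimal sketch of that metric follows; the protocol details here are generic to the cue-conflict literature, not taken from this paper.

```python
def shape_bias(predictions):
    """Fraction of cue-conflict decisions made by shape.

    `predictions` is a list of (predicted, shape_label, texture_label)
    triples; e.g. a cat-shaped image with elephant texture has
    shape_label='cat', texture_label='elephant'. Following the common
    cue-conflict protocol, only trials where the model picked one of
    the two conflicting cues count toward the ratio.
    """
    shape_hits = sum(1 for p, s, t in predictions if p == s)
    texture_hits = sum(1 for p, s, t in predictions if p == t)
    decided = shape_hits + texture_hits
    return shape_hits / decided if decided else float("nan")
```

The ablation experiment then amounts to recomputing this statistic after zeroing out the magnocellular-like units and observing the drop.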
In another set of experiments, the researchers trained the models on videos instead of images, which introduces a temporal dimension. In addition to low spatial resolution and color sensitivity, the magnocellular pathway responds to high temporal frequencies, allowing it to quickly detect changes in the position of an object. When models were trained on biomimetic video input, the units most tuned to high temporal frequencies were indeed the ones that also exhibited magnocellular-like properties in the spatial domain.
Overall, the results support the idea that low-quality sensory input early in life may contribute to the organization of sensory processing pathways of the brain, the researchers say. The findings do not rule out innate specification of the magno and parvo pathways, but provide a proof of principle that visual experience over the course of development could also play a role.
“The general theme that seems to be emerging is that the developmental progression that we go through is very carefully structured in order to give us certain kinds of perceptual proficiencies, and it may also have consequences in terms of the very organization of the brain,” Sinha says.
The research was funded by the National Institutes of Health, the Simons Center for the Social Brain, the Japan Society for the Promotion of Science, and the Yamada Science Foundation.
Being placed in foster care is a necessary intervention for some children. But many advocates worry that kids can languish in foster care too long, with harmful effects for children who are temporarily unattached from a permanent family.
A new study co-authored by an MIT economist shows that an innovative Chilean program providing legal aid to children shortens the length of foster-care stays, returning them to families faster. In the process, it improves long-term social outcomes for kids and even reduces government spending on the foster care system.
“It was amazingly successful because the program got kids out of foster care about 30 percent faster,” says Joseph Doyle, an economist at the MIT Sloan School of Management, who helped lead the research. “Because foster care is expensive, that paid for the program by itself about four times over. If you improve the case management of kids in foster care, you can improve a child’s well-being and save money.”
The authors are Ryan Cooper, a professor and director of government innovation at the University of Chicago; Doyle, who is the Erwin H. Schell Professor of Management at MIT Sloan; and Andrés P. Hojman, a professor at the Pontifical Catholic University of Chile.
Rigorous design
To conduct the study, the scholars examined the Chilean government’s new program “Mi Abogado” — meaning, “My Lawyer” — which provided enhanced legal support to children in foster care, as well as access to psychologists and social workers. Legal advocates in the program were given a reduced caseload, for one thing, to help them focus further on each individual case.
Chile introduced Mi Abogado in 2017, with a feature that made it ripe for careful study: as part of the rollout, most participants were selected at random from the pool of children in the foster care system. That randomization makes it easier to identify the program’s causal impact on later outcomes.
“Very few foster-care redesigns are evaluated in such a rigorous way, and we need more of this innovative approach to policy improvement,” Doyle notes.
The experiment included 1,781 children who were in Chile’s foster care program in 2019, with 581 selected for the Mi Abogado services; it tracked their trajectories over more than two years. Almost all the participants were in group foster-care homes.
In addition to reduced time spent in foster care, the Chilean data showed that children in the Mi Abogado program had a subsequent 30 percent reduction in terms of contact with the criminal justice system and a 5 percent increase in school attendance, compared to children in foster care who did not participate in the program.
“They were getting involved with crime less and attending school more,” Doyle says.
As powerful as the results appear, Doyle acknowledges that he would like to be able to analyze further which elements of the Mi Abogado program had the biggest impact — legal help, counseling and therapy, or other factors.
“We would like to see more about what exactly they are doing for children to speed their exit from care,” Doyle says. “Is it mostly about therapy? Is it working with judges and cutting through red tape? We think the lawyer is a very important part. But the results suggest it is not just the lawyer that improves outcomes.”
More programs in other places?
The current paper is one of many studies Doyle has developed during his career that relate to foster care and related issues. In another forthcoming paper, Doyle and some co-authors find that about 5 percent of U.S. children spend some time in foster care — a number that appears to be fairly common internationally, too.
“People don’t appreciate how common child protective services and foster care are,” Doyle says. Moreover, he adds, “Children involved in these systems are particularly vulnerable.”
With a variety of U.S. jurisdictions running their own foster-care systems, Doyle notes that many people have the opportunity to usefully learn about the Mi Abogado program and consider if its principles might be worth testing. And while that requires some political will, Doyle expresses optimism that policymakers might be open to new ideas.
“It’s not really a partisan issue,” Doyle says. “Most people want to help protect kids, and, if an intervention is needed for kids, have an interest in making the intervention run well.”
After all, he notes, the impact of the Mi Abogado program appears to be both substantial and lasting, making it an interesting example to consider.
“Here we have a case where the child outcomes are improved and the government saved money,” Doyle observes. “I’d like to see more experimentation with programs like this in other places.”
Support for the research was provided in part by the MIT Sloan Latin America Office. Chile’s Studies Department of the Ministry of Education made data available from the education system.
Gitanjali Rao, a rising junior at MIT majoring in biological engineering, has been named the first-ever recipient of the Stephen Hawking Junior Medal for Science Communication. This award, presented by the Starmus Festival, is a new category of the already prestigious award created by the late theoretical physicist, cosmologist, and author Stephen Hawking and the Starmus Festival.
“I spend a lot of time in labs,” says Rao, highlighting her Undergraduate Research Opportunities Program project in the Langer Lab. Along with her curiosity to explore, she also has a passion for helping others understand what happens inside the lab. “We very rarely discuss why science communication is important,” she says. “Stephen Hawking was incredible at that.”
Rao is the inventor of Epione, a device for early diagnosis of prescription opioid addiction, and Kindly, an anti-cyber-bullying service powered by AI and natural language processing. Kindly is now a United Nations Children's Fund “Digital Public Good” service and is accessible worldwide. These efforts, among others, brought her to the attention of the Starmus team.
The award ceremony was held last April at the Kennedy Center in Washington, where Rao gave a speech and met acclaimed scientists, artists, and musicians. “It was one for the books,” she says. “I met Brian May from Queen — he's a physicist.” Rao is also a musician in her own right — she plays bass guitar and piano, and she's been learning to DJ at MIT. “Starmus” is a portmanteau of “stars” and “music.”
Originally from Denver, Colorado, Rao attended a STEM-focused school before MIT. Looking ahead, she's open to graduate school, and dreams of launching a biotech startup when the right idea comes.
The medal comes with an internship opportunity that Rao hopes to use for fieldwork or experience in the pharmaceutical industry. She’s already secured a summer internship at Moderna, and is considering spending Independent Activities Period abroad. “Hopefully, I'll have a better idea in the next few months.”
Jill Tarter (left), SETI pioneer and STARMUS board member, and Garik Israelian (right), STARMUS co-founder and director, present the Stephen Hawking Junior Medal to Gitanjali Rao.
The International Architecture Exhibition of La Biennale di Venezia holds up a mirror to the industry — not only reflecting current priorities and preoccupations, but also projecting an agenda for what might be possible.
Curated by Carlo Ratti, MIT professor of practice of urban technologies and planning, this year’s exhibition (“Intelligens. Natural. Artificial. Collective”) proposes a “Circular Economy Manifesto” with the goal to support the “development and production of projects that utilize natural, artificial, and collective intelligence to combat the climate crisis.”
Designers and architects will quickly recognize the paradox of this year’s theme. Global architecture festivals have historically had a high carbon footprint, using vast amounts of energy, resources, and materials to build and transport temporary structures that are later discarded. This year’s unprecedented emphasis on waste elimination and carbon neutrality challenges participants to reframe apparent limitations into creative constraints. In this way, the Biennale acts as a microcosm of current planetary conditions — a staging ground to envision and practice adaptive strategies.
VAMO (Vegetal, Animal, Mineral, Other)
When Ratti approached John Ochsendorf, MIT professor and founding director of MIT Morningside Academy for Design (MAD), with the invitation to interpret the theme of circularity, the project became the premise for a convergence of ideas, tools, and know-how from multiple teams at MIT and the wider MIT community.
The Digital Structures research group, directed by Professor Caitlin Mueller, applied expertise in designing efficient structures of tension and compression. The Circular Engineering for Architecture research group, led by MIT alumna Catherine De Wolf at ETH Zurich, explored how digital technologies and traditional woodworking techniques could make optimal use of reclaimed timber. Early-stage startups — including companies launched by the venture accelerator MITdesignX — contributed innovative materials harnessing natural byproducts from vegetal, animal, mineral, and other sources.
The result is VAMO (Vegetal, Animal, Mineral, Other), an ultra-lightweight, biodegradable, and transportable canopy designed to circle around a brick column in the Corderie of the Venice Arsenale — a historic space originally used to manufacture ropes for the city’s naval fleet.
“This year’s Biennale marks a new radicalism in approaches to architecture,” says Ochsendorf. “It’s no longer sufficient to propose an exciting idea or present a stylish installation. The conversation on material reuse must have relevance beyond the exhibition space, and we’re seeing a hunger among students and emerging practices to have a tangible impact. VAMO isn’t just a temporary shelter for new thinking. It’s a material and structural prototype that will evolve into multiple different forms after the Biennale.”
Tension and compression
The choice to build the support structure from reclaimed timber and hemp rope called for a highly efficient design to maximize the inherent potential of comparatively humble materials. Working purely in tension (the spliced cable net) or compression (the oblique timber rings), the structure appears to float — yet is capable of supporting substantial loads across large distances. The canopy weighs less than 200 kilograms and covers over 6 meters in diameter, highlighting the incredible lightness that equilibrium forms can achieve. VAMO simultaneously showcases a series of sustainable claddings and finishes made from surprising upcycled materials — from coconut husks, spent coffee grounds, and pineapple peel to wool, glass, and scraps of leather.
The Digital Structures research group led the design of structural geometries conditioned by materiality and gravity. “We knew we wanted to make a very large canopy,” says Mueller. “We wanted it to have anticlastic curvature suggestive of naturalistic forms. We wanted it to tilt up to one side to welcome people walking from the central corridor into the space. However, these effects are almost impossible to achieve with today's computational tools that are mostly focused on drawing rigid materials.”
In response, the team applied two custom digital tools, Ariadne and Theseus, developed in-house to enable a process of inverse form-finding: a way of discovering forms that achieve the experiential qualities of an architectural project based on the mechanical properties of the materials. These tools allowed the team to model three-dimensional design concepts and automatically adjust geometries to ensure that all elements were held in pure tension or compression.
“Using digital tools enhances our creativity by allowing us to choose between multiple different options and short-circuit a process that would have otherwise taken months,” says Mueller. “However, our process is also generative of conceptual thinking that extends beyond the tool — we’re constantly thinking about the natural and historic precedents that demonstrate the potential of these equilibrium structures.”
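Ariadne and Theseus are in-house tools whose internals are not described here. As a generic illustration of the kind of equilibrium form-finding they perform for tension structures, the classic force density method reduces the problem to a linear solve: each free node's position is determined by requiring cable forces and external loads to balance. This is a minimal sketch under assumed inputs, not the MIT team's code.

```python
import numpy as np

def cable_net_equilibrium(edges, q, free, anchors, loads):
    """Force density method: equilibrium positions of free nodes.

    edges   : list of (i, j) node-index pairs forming cables
    q       : force density (force / length) per edge; > 0 means tension
    free    : node indices whose positions are unknown
    anchors : {node: (x, y, z)} fixed support coordinates
    loads   : {free node: (fx, fy, fz)} external loads

    Each row of the system states: sum_j q_ij (x_j - x_i) + load_i = 0.
    """
    index = {node: k for k, node in enumerate(free)}
    n = len(free)
    A = np.zeros((n, n))
    b = np.zeros((n, 3))
    for (i, j), qe in zip(edges, q):
        for a, c in ((i, j), (j, i)):
            if a not in index:
                continue  # equilibrium is only written for free nodes
            A[index[a], index[a]] += qe
            if c in index:
                A[index[a], index[c]] -= qe
            else:
                b[index[a]] += qe * np.asarray(anchors[c], dtype=float)
    for node, load in loads.items():
        b[index[node]] += np.asarray(load, dtype=float)
    return np.linalg.solve(A, b)
```

For example, a single free node strung between two anchors sags below the line connecting them when loaded downward, exactly the equilibrium behavior a hanging cable net exhibits; because the equations are linear in the force densities, tools built on this idea can adjust geometry automatically while keeping every element in pure tension.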
Digital efficiency and human creativity
Lightweight enough to be carried as standard luggage, the hemp rope structure was spliced by hand and transported from Massachusetts to Venice. Meanwhile, the heavier timber structure was constructed in Zurich, where it could be transported by train — thereby significantly reducing the project’s overall carbon footprint.
The wooden rings were fabricated using salvaged beams and boards from two temporary buildings in Switzerland — the Huber and Music Pavilions — following a pedagogical approach that De Wolf has developed for the Digital Creativity for Circular Construction course at ETH Zurich. Each year, her students are tasked with disassembling a building due for demolition and using the materials to design a new structure. In the case of VAMO, the goal was to upcycle the wood while avoiding the use of chemicals, high-energy methods, or non-biodegradable components (such as metal screws or plastics).
“Our process embraces all three types of intelligence celebrated by the exhibition,” says De Wolf. “The natural intelligence of the materials selected for the structure and cladding; the artificial intelligence of digital tools empowering us to upcycle, design, and fabricate with these natural materials; and the crucial collective intelligence that unlocks possibilities of newly developed reused materials, made possible by the contributions of many hands and minds.”
For De Wolf, true creativity in digital design and construction requires a context-sensitive approach to identifying when and how such tools are best applied in relation to hands-on craftsmanship.
Through a process of collective evaluation, it was decided that the 20-foot lower ring would be assembled with eight scarf joints using wedges and wooden pegs, thereby removing the need for metal screws. The scarf joints were crafted through five-axis CNC milling; the smaller, dual-jointed upper ring was shaped and assembled by hand by Nicolas Petit-Barreau, founder of the Swiss woodwork company Anku, who applied his expertise in designing and building yurts, domes, and furniture to the VAMO project.
“While digital tools suited the repetitive joints of the lower ring, the upper ring’s two unique joints were more efficiently crafted by hand,” says Petit-Barreau. “When it comes to designing for circularity, we can learn a lot from time-honored building traditions. These methods were refined long before we had access to energy-intensive technologies — they also allow for the level of subtlety and responsiveness necessary when adapting to the irregularities of reused wood.”
A material palette for circularity
The structural system of a building is often its most energy-intensive element, an impact dramatically mitigated here by the collaborative design and fabrication process developed by MIT Digital Structures and ETH Circular Engineering for Architecture. The structure also serves to showcase panels made of biodegradable and low-energy materials — many of which were advanced through ventures supported by MITdesignX, a program dedicated to design innovation and entrepreneurship at MAD.
“In recent years, several MITdesignX teams have proposed ideas for new sustainable materials that might at first seem far-fetched,” says Gilad Rosenzweig, executive director of MITdesignX. “For instance, using spent coffee grounds to create a leather-like material (Cortado), or creating compostable acoustic panels from coconut husks and reclaimed wool (Kokus). This reflects a major cultural shift in the architecture profession toward rethinking the way we build, but it’s not enough just to have an inventive idea. To achieve impact — to convert invention into innovation — teams have to prove that their concept is cost-effective, viable as a business, and scalable.”
Aligned with the ethos of MAD, MITdesignX assesses profit and productivity in terms of environmental and social sustainability. In addition to presenting the work of R&D teams involved in MITdesignX, VAMO also exhibits materials produced by collaborating teams at University of Pennsylvania’s Stuart Weitzman School of Design, Politecnico di Milano, and other partners, such as Manteco.
The result is a composite structure that encapsulates multiple life spans within a diverse material palette of waste materials from vegetal, animal, and mineral forms. Panels of Ananasse, a material made from pineapple peels developed by Vérabuccia, preserve the fruit’s natural texture as a surface pattern, while rehub repurposes fragments of multicolored Murano glass into a flexible terrazzo-like material; COBI creates breathable shingles from coarse wool and beeswax, and DumoLab produces fuel-free 3D-printable wood panels.
A purpose beyond permanence
Adriana Giorgis, a designer and teaching fellow in architecture at MIT, played a crucial role in bringing the parts of the project together. Her research explores the diverse network of factors that influence whether a building stands the test of time, and her insights helped to shape the collective understanding of long-term design thinking.
“As a point of connection between all the teams, helping to guide the design as well as serving as a project manager, I had the chance to see how my research applied at each level of the project,” Giorgis reflects. “Braiding these different strands of thinking and ultimately helping to install the canopy on site brought forth a stronger idea about what it really means for a structure to have longevity. VAMO isn’t limited to its current form — it’s a way of carrying forward a powerful idea into contemporary and future practice.”
What’s next for VAMO? Neither the attempt at architectural permanence associated with built projects, nor the relegation to waste common to temporary installations. After the Biennale, VAMO will be disassembled, possibly reused for further exhibitions, and finally relocated to a natural reserve in Switzerland, where the parts will be researched as they biodegrade. In this way, the lifespan of the project is extended beyond its initial purpose for human habitation and architectural experimentation, revealing the gradual material transformations constantly taking place in our built environment.
To quote Carlo Ratti’s Circular Economy Manifesto, the “lasting legacy” of VAMO is to “harness nature’s intelligence, where nothing is wasted.” Through a regenerative symbiosis of natural, artificial, and collective intelligence, could architectural thinking and practice expand to planetary proportions?
The Substance Use Disorders Ventures Bootcamp ignites innovators like Evan Kharasch to turn research breakthroughs into treatments for substance use disorder.
Often when we listen to music, we just instinctually enjoy it. Sometimes, though, it’s worth dissecting a song or other composition to figure out how it’s built.
Take the 1953 jazz standard “Satin Doll,” written by Duke Ellington and Billy Strayhorn, whose subtle structure rewards a close listening. As it happens, MIT Professor Emeritus Samuel Jay Keyser, a distinguished linguist and an avid trombonist on the side, has given the song careful scrutiny.
To Keyser, “Satin Doll” is a glittering example of what he calls the “same/except” construction in art. A basic rhyme, like “rent” and “tent,” is another example of this construction, given the shared rhyming sound and the different starting consonants.
In “Satin Doll,” Keyser observes, both the music and words feature a “same/except” structure. For instance, the rhythm of the first two bars of “Satin Doll” is the same as the second two bars, but the pitch goes up a step in bars three and four. This intricate pattern prevails throughout the entire body of “Satin Doll,” creating what Keyser calls “a musical rhyme scheme.”
When lyricist Johnny Mercer wrote words for “Satin Doll,” he matched the musical rhyme scheme. One lyric for the first four bars is, “Cigarette holder / which wigs me / Over her shoulder / she digs me.” Other verses follow the same pattern.
“Both the lyrics and the melody have the same rhyme scheme in their separate mediums, words and music, namely, A-B-A-B,” says Keyser. “That’s how you write lyrics. If you understand the musical rhyme scheme, and write lyrics to match that, you are introducing a whole new level of repetition, one that enhances the experience.”
Now, Keyser has a new book out about repetition in art and its cognitive impact on us, scrutinizing “Satin Doll” along with many other works of music, poetry, painting, and photography. The volume, “Play It Again, Sam: Repetition in the Arts,” is published by the MIT Press. The title is partly a play on Keyser’s name.
Inspired by the Margulis experiment
The genesis of “Play It Again, Sam” dates back several years, when Keyser encountered an experiment conducted by musicologist Elizabeth Margulis, described in her 2014 book, “On Repeat.” Margulis found that when she altered modern atonal compositions to add repetition to them, audiences ranging from ordinary listeners to music theorists preferred these edited versions to the original works.
“The Margulis experiment really caused the ideas to materialize,” Keyser says. He then examined repetition across art forms for which research on the associated cognitive activity exists, especially music, poetry, and the visual arts. For instance, the brain has distinct locations dedicated to the recognition of faces, places, and bodies. Keyser suggests this is why, prior to the advent of modernism, painting was overwhelmingly mimetic.
Ideally, he suggests, it will be possible to more comprehensively study how our brains process art — to see if encountering repetition triggers an endorphin release, say. For now, Keyser postulates that repetition involves what he calls the 4 Ps: priming, parallelism, prediction, and pleasure. Essentially, hearing or seeing a motif sets the stage for it to be repeated, providing audiences with satisfaction when they discover the repetition.
With remarkable range, Keyser vigorously analyzes how artists deploy repetition and have thought about it, from “Beowulf” to Leonard Bernstein, from Gustave Caillebotte to Italo Calvino. Some artworks do deploy identical repetition of elements, such as the Homeric epics; others use the “same/except” technique.
Keyser is deeply interested in visual art displaying the “same/except” concept, such as Andy Warhol’s famous “Campbell’s Soup Cans” painting. It features four rows of eight soup cans, which are all the same — except for the kind of soup on each can.
“Discovering this ‘same/except’ repetition in a work of art brings pleasure,” Keyser says.
But why is this? Multiple experimental studies, Keyser notes, suggest that repeated exposure of a subject to an image — such as an infant’s exposure to its mother’s face — helps create a bond of affection. This is the “mere exposure” phenomenon, posited by social psychologist Robert Zajonc, who as Keyser notes in the book, studied in detail “the repetition of an arbitrary stimulus and the mild affection that people eventually have for it.”
This tendency also helps explain why manufacturers create ads featuring just the name of their products: Seen often enough, the viewer bonds with the name. However the mechanism connecting repetition with pleasure works, and whatever its original function, Keyser argues that many artists have successfully tapped into it, grasping that audiences like repetition in poetry, painting, and music.
A shadow dog in Albuquerque
In the book, Keyser’s emphasis on repetition generates some distinctive interpretive positions. In one chapter, he digs into Lee Friedlander’s well-known photo, “Albuquerque, New Mexico,” a street scene with a jumble of signs, wires, and buildings, often interpreted in symbolic terms: It’s the American West frontier being submerged under postwar concrete and commerce.
Keyser, however, takes a markedly different view of the Friedlander photo. There is a dog sitting near the middle of it; to the right is the shadow of a street sign. Keyser believes the shadow resembles the dog, creating a playful repetition in the photo.
“This particular photograph is really two photographs that rhyme,” Keyser says. “They’re the same, except one is the dog and one is the shadow. And that’s why that photograph is pleasurable, because you see that, even if you may not be fully aware of it. Sensing repetition in a work of art brings pleasure.”
“Play It Again, Sam” has received praise from arts practitioners, among others. George Darrah, principal drummer and arranger of the Boston Pops Orchestra, has called the book “extraordinary” in its “demonstration of the ways that poetry, music, painting, and photography engender pleasure in their audiences by exploiting the ability of the brain to detect repetition.” He adds that “Keyser has an uncanny ability to simplify complex ideas so that difficult material is easily understandable.”
In certain ways “Play It Again, Sam” contains the classic intellectual outlook of an MIT linguist. For decades, MIT-linked linguistics research has identified the universal structures of human language, revealing important similarities despite the seemingly wild variation of global languages. And here too, Keyser finds patterns that help organize an apparently boundless world of art. “Play It Again, Sam” is a hunt for structure.
Asked about this, Keyser acknowledges the influence of his longtime field on his current intellectual explorations, while noting that his insights about art are part of a greater investigation into our works and minds.
“I’m bringing a linguistic habit of mind to art,” Keyser says. “But I’m also pointing an analytical lens in the direction of natural predilections of the brain. The idea is to investigate how our aesthetic sense depends on the way the mind works. I’m trying to show how art can exploit the brain’s capacity to produce pleasure from non-art related functions.”
MIT professor emeritus and avid trombonist Samuel Jay Keyser is the author of “Play It Again, Sam: Repetition in the Arts,” published by the MIT Press.
Using an inexpensive electrode coated with DNA, MIT researchers have designed disposable diagnostics that could be adapted to detect a variety of diseases, including cancer or infectious diseases such as influenza and HIV.
These electrochemical sensors make use of a DNA-chopping enzyme found in the CRISPR gene-editing system. When a target such as a cancerous gene is detected by the enzyme, it begins shearing DNA from the electrode nonspecifically, like a lawnmower cutting grass, altering the electrical signal produced.
One of the main limitations of this type of sensing technology is that the DNA that coats the electrode breaks down quickly, so the sensors can’t be stored for very long and their storage conditions must be tightly controlled, limiting where they can be used. In a new study, MIT researchers stabilized the DNA with a polymer coating, allowing the sensors to be stored for up to two months, even at high temperatures. After storage, the sensors were able to detect a prostate cancer gene that is often used to diagnose the disease.
The DNA-based sensors, which cost only about 50 cents to make, could offer a cheaper way to diagnose many diseases in low-resource regions, says Ariel Furst, the Paul M. Cook Career Development Assistant Professor of Chemical Engineering at MIT and the senior author of the study.
“Our focus is on diagnostics that many people have limited access to, and our goal is to create a point-of-use sensor. People wouldn’t even need to be in a clinic to use it. You could do it at home,” Furst says.
MIT graduate student Xingcheng Zhou is the lead author of the paper, published June 30 in the journal ACS Sensors. Other authors of the paper are MIT undergraduate Jessica Slaughter, Smah Riki ’24, and graduate student Chao Chi Kuo.
An inexpensive sensor
Electrochemical sensors work by measuring changes in the flow of an electric current when a target molecule interacts with an enzyme. This is the same technology that glucose meters use to detect concentrations of glucose in a blood sample.
The electrochemical sensors developed in Furst’s lab consist of DNA adhered to an inexpensive gold leaf electrode, which is laminated onto a sheet of plastic. The DNA is attached to the electrode using a sulfur-containing molecule known as a thiol.
In a 2021 study, Furst’s lab showed that they could use these sensors to detect genetic material from HIV and human papillomavirus (HPV). The sensors detect their targets using a guide RNA strand, which can be designed to bind to nearly any DNA or RNA sequence. The guide RNA is linked to an enzyme called Cas12, which cleaves DNA nonspecifically when it is turned on and is in the same family of proteins as the Cas9 enzyme used for CRISPR genome editing.
If the target is present, it binds to the guide RNA and activates Cas12, which then cuts the DNA adhered to the electrode. That alters the current produced by the electrode, which can be measured using a potentiostat (the same technology used in handheld glucose meters).
“If Cas12 is on, it’s like a lawnmower that cuts off all the DNA on your electrode, and that turns off your signal,” Furst says.
In previous versions of the device, the DNA had to be added to the electrode just before it was used, because DNA doesn’t remain stable for very long. In the new study, the researchers found that they could increase the stability of the DNA by coating it with a polymer called polyvinyl alcohol (PVA).
This polymer, which costs less than 1 cent per coating, acts like a tarp that protects the DNA below it. Once deposited onto the electrode, the polymer dries to form a protective thin film.
“Once it’s dried, it seems to make a very strong barrier against the main things that can harm DNA, such as reactive oxygen species that can either damage the DNA itself or break the thiol bond with the gold and strip your DNA off the electrode,” Furst says.
Successful detection
The researchers showed that this coating could protect DNA on the sensors for at least two months, and it could also withstand temperatures up to about 150 degrees Fahrenheit. After two months, they rinsed off the polymer and demonstrated that the sensors could still detect PCA3, a prostate cancer gene that can be found in urine.
This type of test could be used with a variety of samples, including urine, saliva, or nasal swabs. The researchers hope to use this approach to develop cheaper diagnostics for infectious diseases, such as HPV or HIV, that could be used in a doctor’s office or at home. This approach could also be used to develop tests for emerging infectious diseases, the researchers say.
A group of researchers from Furst’s lab was recently accepted into delta v, MIT’s student venture accelerator, where they hope to launch a startup to further develop this technology. Now that the researchers can create tests with a much longer shelf-life, they hope to begin shipping them to locations where they could be tested with patient samples.
“Our goal is to continue to test with patient samples against different diseases in real world environments,” Furst says. “Our limitation before was that we had to make the sensors on site, but now that we can protect them, we can ship them. We don’t have to use refrigeration. That allows us to access a lot more rugged or non-ideal environments for testing.”
The research was funded, in part, by the MIT Research Support Committee and a MathWorks Fellowship.
The electrochemical sensors developed in Ariel Furst’s lab consist of DNA adhered to an inexpensive gold leaf electrode, which is laminated onto a sheet of plastic.
A new imaging technique developed by MIT researchers could enable quality-control robots in a warehouse to peer through a cardboard shipping box and see that the handle of a mug buried under packing peanuts is broken.
Their approach leverages millimeter wave (mmWave) signals, the same type of signals used in Wi-Fi, to create accurate 3D reconstructions of objects that are blocked from view.
The waves can travel through common obstacles like plastic containers or interior walls, and reflect off hidden objects. The system, called mmNorm, collects those reflections and feeds them into an algorithm that estimates the shape of the object’s surface.
This new approach achieved 96 percent reconstruction accuracy on a range of everyday objects with complex, curvy shapes, like silverware and a power drill. State-of-the-art baseline methods achieved only 78 percent accuracy.
In addition, mmNorm does not require additional bandwidth to achieve such high accuracy. This efficiency could allow the method to be utilized in a wide range of settings, from factories to assisted living facilities.
For instance, mmNorm could enable robots working in a factory or home to distinguish between tools hidden in a drawer and identify their handles, so they could more efficiently grasp and manipulate the objects without causing damage.
“We’ve been interested in this problem for quite a while, but we’ve been hitting a wall because past methods, while they were mathematically elegant, weren’t getting us where we needed to go. We needed to come up with a very different way of using these signals than what has been used for more than half a century to unlock new types of applications,” says Fadel Adib, associate professor in the Department of Electrical Engineering and Computer Science, director of the Signal Kinetics group in the MIT Media Lab, and senior author of a paper on mmNorm.
Adib is joined on the paper by research assistants Laura Dodds, the lead author, and Tara Boroushaki, and former postdoc Kaichen Zhou. The research was recently presented at the Annual International Conference on Mobile Systems, Applications and Services.
Reflecting on reflections
Traditional radar techniques send mmWave signals and receive reflections from the environment to detect hidden or distant objects, a technique called back projection.
This method works well for large objects, like an airplane obscured by clouds, but the image resolution is too coarse for small items like kitchen gadgets that a robot might need to identify.
In studying this problem, the MIT researchers realized that existing back projection techniques ignore an important property known as specularity. When a radar system transmits mmWaves, almost every surface the waves strike acts like a mirror, generating specular reflections.
If a surface is pointed toward the antenna, the signal will reflect off the object to the antenna, but if the surface is pointed in a different direction, the reflection will travel away from the radar and won’t be received.
“Relying on specularity, our idea is to try to estimate not just the location of a reflection in the environment, but also the direction of the surface at that point,” Dodds says.
They developed mmNorm to estimate what is called a surface normal, which is the direction of a surface at a particular point in space, and use these estimations to reconstruct the curvature of the surface at that point.
Combining surface normal estimations at each point in space, mmNorm uses a special mathematical formulation to reconstruct the 3D object.
The researchers created an mmNorm prototype by attaching a radar to a robotic arm, which continually takes measurements as it moves around a hidden item. The system compares the strength of the signals it receives at different locations to estimate the curvature of the object’s surface.
For instance, the antenna will receive the strongest reflections from a surface pointed directly at it and weaker signals from surfaces that don’t directly face the antenna.
Because multiple antennas on the radar receive some amount of reflection, each antenna “votes” on the direction of the surface normal based on the strength of the signal it received.
“Some antennas might have a very strong vote, some might have a very weak vote, and we can combine all votes together to produce one surface normal that is agreed upon by all antenna locations,” Dodds says.
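The voting scheme Dodds describes can be sketched as a weighted average: each antenna proposes a direction (roughly, the unit vector from the surface point toward that antenna) and weights its vote by the reflection strength it measured, since a strong return suggests the surface faces that antenna. A minimal illustration in Python — the antenna positions, strengths, and weighting rule here are hypothetical, not the paper’s actual formulation:

```python
import math

def estimate_surface_normal(point, antenna_positions, signal_strengths):
    """Combine per-antenna 'votes' into one surface-normal estimate.

    Each antenna votes for the unit vector pointing from the surface
    point toward itself; the vote is weighted by the reflection
    strength that antenna measured.
    """
    vote = [0.0, 0.0, 0.0]
    for pos, strength in zip(antenna_positions, signal_strengths):
        d = [p - q for p, q in zip(pos, point)]          # toward the antenna
        norm = math.sqrt(sum(c * c for c in d))
        vote = [v + strength * c / norm for v, c in zip(vote, d)]
    total = math.sqrt(sum(v * v for v in vote))
    return [v / total for v in vote]                     # unit-length consensus

# Toy example: three antennas above a surface point at the origin;
# the strongest return comes from the antenna directly overhead.
normal = estimate_surface_normal(
    point=(0.0, 0.0, 0.0),
    antenna_positions=[(0.0, 0.0, 1.0), (1.0, 0.0, 1.0), (-1.0, 0.0, 1.0)],
    signal_strengths=[1.0, 0.2, 0.2],
)
# normal points straight up, approximately [0.0, 0.0, 1.0]
```

The consensus direction leans toward whichever antennas report the strongest reflections, which is the intuition behind combining strong and weak votes into one agreed-upon normal.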
In addition, because mmNorm estimates the surface normal from all points in space, it generates many possible surfaces. To zero in on the right one, the researchers borrowed techniques from computer graphics, creating a 3D function that chooses the surface most representative of the signals received. They use this to generate a final 3D reconstruction.
Finer details
The team tested mmNorm’s ability to reconstruct more than 60 objects with complex shapes, like the handle and curve of a mug. It generated reconstructions with about 40 percent less error than state-of-the-art approaches, while also estimating the position of an object more accurately.
Their new technique can also distinguish between multiple objects, like a fork, knife, and spoon hidden in the same box. It also performed well for objects made from a range of materials, including wood, metal, plastic, rubber, and glass, as well as combinations of materials, but it does not work for objects hidden behind metal or very thick walls.
“Our qualitative results really speak for themselves. And the amount of improvement you see makes it easier to develop applications that use these high-resolution 3D reconstructions for new tasks,” Boroushaki says.
For instance, a robot can distinguish between multiple tools in a box, determine the precise shape and location of a hammer’s handle, and then plan to pick it up and use it for a task. One could also use mmNorm with an augmented reality headset, enabling a factory worker to see lifelike images of fully occluded objects.
It could also be incorporated into existing security and defense applications, generating more accurate reconstructions of concealed objects in airport security scanners or during military reconnaissance.
The researchers want to explore these and other potential applications in future work. They also want to improve the resolution of their technique, boost its performance for less reflective objects, and enable the mmWaves to effectively image through thicker occlusions.
“This work really represents a paradigm shift in the way we are thinking about these signals and this 3D reconstruction process. We’re excited to see how the insights that we’ve gained here can have a broad impact,” Dodds says.
This work is supported, in part, by the National Science Foundation, the MIT Media Lab, and Microsoft.
A new system enables a robot to use reflected Wi-Fi signals to identify the shape of a 3D object that is hidden from view, which could be especially useful in warehouse and factory settings.
Imagine that you want to know the plot of a movie, but you only have access to either the visuals or the sound. With visuals alone, you’ll miss all the dialogue. With sound alone, you will miss the action. Understanding our biology can be similar. Measuring one kind of data — such as which genes are being expressed — can be informative, but it only captures one facet of a multifaceted story. For many biological processes and disease mechanisms, the entire “plot” can’t be fully understood without combining data types.
However, capturing both the “visuals and sound” of biological data, such as gene expression and cell structure data, from the same cells requires researchers to develop new approaches. They also have to make sure that the data they capture accurately reflects what happens in living organisms, including how cells interact with each other and their environments.
Whitehead Institute for Biomedical Research and Harvard University researchers have taken on these challenges and developed Perturb-Multimodal (Perturb-Multi), a powerful new approach that simultaneously measures how genetic changes such as turning off individual genes affect both gene expression and cell structure in intact liver tissue. The method, described in Cell on June 12, aims to accelerate discovery of how genes control organ function and disease.
The research team, led by Whitehead Institute Member Jonathan Weissman and then-graduate student in his lab Reuben Saunders, along with Xiaowei Zhuang, the David B. Arnold Professor of Science at Harvard University, and then-postdoc in her lab Will Allen, created a system that can test hundreds of different genetic modifications within a single mouse liver while capturing multiple types of data from the same cells.
“Understanding how our organs work requires looking at many different aspects of cell biology at once,” Saunders says. “With Perturb-Multi, we can see how turning off specific genes changes not just what other genes are active, but also how proteins are distributed within cells, how cellular structures are organized, and where cells are located in the tissue. It’s like having multiple specialized microscopes all focused on the same experiment.”
“This approach accelerates discovery by both allowing us to test the functions of many different genes at once, and then for each gene, allowing us to measure many different functional outputs or cell properties at once — and we do that in intact tissue from animals,” says Zhuang, who is also a Howard Hughes Medical Institute (HHMI) investigator.
A more efficient approach to genetic studies
Traditional genetic studies in mice often turn off one gene and then observe what changes in that gene’s absence to learn about what the gene does. The researchers designed their approach to turn off hundreds of different genes across a single liver, while still only turning off one gene per cell — using what is known as a mosaic approach. This allowed them to study the roles of hundreds of individual genes at once in a single individual. The researchers then collected diverse types of data from cells across the same liver to get a full picture of the consequences of turning off the genes.
“Each cell serves as its own experiment, and because all the cells are in the same animal, we eliminate the variability that comes from comparing different mice,” Saunders says. “Every cell experiences the same physiological conditions, diet, and environment, making our comparisons much more precise.”
“The challenge we faced was that tissues, to perform their functions, rely on thousands of genes, expressed in many different cells, working together. Each gene, in turn, can control many aspects of a cell’s function. Testing these hundreds of genes in mice using current methods would be extremely slow and expensive — near impossible, in practice,” Allen says.
Revealing new biology through combined measurements
The team applied Perturb-Multi to study genetic controls of liver physiology and function. Their study led to discoveries in three important aspects of liver biology: fat accumulation in liver cells — a precursor to liver disease; stress responses; and hepatocyte zonation (how liver cells specialize, assuming different traits and functions, based on their location within the liver).
One striking finding emerged from studying genes that, when disrupted, cause fat accumulation in liver cells. The imaging data revealed that four different genes all led to similar fat droplet accumulation, but the sequencing data showed they did so through three completely different mechanisms.
“Without combining imaging and sequencing, we would have missed this complexity entirely,” Saunders says. “The imaging told us which genes affect fat accumulation, while the sequencing revealed whether this was due to increased fat production, cellular stress, or other pathways. This kind of mechanistic insight could be crucial for developing targeted therapies for fatty liver disease.”
The researchers also discovered new regulators of liver cell zonation. Unexpectedly, the newly discovered regulators include genes involved in modifying the extracellular matrix — the scaffolding between cells. “We found that cells can change their specialized functions without physically moving to a different zone,” Saunders says. “This suggests that liver cell identity is more flexible than previously thought.”
Technical innovation enables new science
Developing Perturb-Multi required solving several technical challenges. The team created new methods for preserving the content of interest in cells — RNA and proteins — during tissue processing, for collecting many types of imaging data and single-cell gene expression data from tissue samples that have been fixed with a preservative, and for integrating multiple types of data from the same cells.
“Overcoming the inherent complexity of biology in living animals required developing new tools that bridge multiple disciplines — including, in this case, genomics, imaging, and AI,” Allen says.
The two components of Perturb-Multi — the imaging and sequencing assays — together, applied to the same tissue, provide insights that are unattainable through either assay alone.
“Each component had to work perfectly while not interfering with the others,” says Weissman, who is also a professor of biology at MIT and an HHMI investigator. “The technical development took considerable effort, but the payoff is a system that can reveal biology we simply couldn’t see before.”
Expanding to new organs and other contexts
The researchers plan to expand Perturb-Multi to other organs, including the brain, and to study how genetic changes affect organ function under different conditions like disease states or dietary changes.
“We’re also excited about using the data we generate to train machine learning models,” adds Saunders. “With enough examples of how genetic changes affect cells, we could eventually predict the effects of mutations without having to test them experimentally — a ‘virtual cell’ that could accelerate both research and drug development.”
“Perturbation data are critical for training such AI models and the paucity of existing perturbation data represents a major hindrance in such ‘virtual cell’ efforts,” Zhuang says. “We hope Perturb-Multi will fill this gap by accelerating the collection of perturbation data.”
The approach is designed to be scalable, with the potential for genome-wide studies that test thousands of genes simultaneously. As sequencing and imaging technologies continue to improve, the researchers anticipate that Perturb-Multi will become even more powerful and accessible to the broader research community.
“Our goal is to keep scaling up. We plan to do genome-wide perturbations, study different physiological conditions, and look at different organs,” says Weissman. “That we can now collect so many types of data from so many cells, at speed, is going to be critical for building AI models like virtual cells, and I think it’s going to help us answer previously unsolvable questions about health and disease.”
Whitehead Institute and Harvard researchers developed Perturb-Multimodal (Perturb-Multi), a powerful new approach that simultaneously measures how genetic changes, such as turning off individual genes, affect both gene expression and cell structure in intact liver tissue.
Giuseppe Antoniazzi is developing a diagnostic toolkit that gives early warning of fibrotic diseases. In doing so, this Pioneer Fellow wishes to contribute to the early detection of tissue scarring, which is usually noticed too late and can barely be halted, and enable countermeasures to be implemented.
Using nuclear magnetic resonance, researchers at ETH Zurich have studied the atomic environments of single platinum atoms in solid supports as well as their spatial orientation. In the future, this method can be used to optimize the production of single-atom catalysts.
Scientists are using trapped ions in experiments to search for signs of a new particle that could help explain the mysterious dark matter. Researchers at ETH Zurich are combining their results with findings from teams in Germany and Australia.
Health
Riskier to know — or not to know — you’re predisposed to a disease?
‘DNA isn’t a crystal ball for every kind of illness’ but potential benefits outweigh fears, says geneticist
Sy Boles
Harvard Staff Writer
July 1, 2025
Robert Green. Veasey Conway/Harvard Staff Photographer
A series exploring how risk shapes our decisions.
Congratulations! You have a newborn baby. She has plump cheeks, a round little belly, and the right number of fingers and toes. Everything seems just dandy. But unbeknownst to you, a risk is hiding in her DNA: some percent chance that later in life she’ll develop high cholesterol and have a heart attack in her 40s. Maybe it’s a 5 percent chance. Maybe it’s 80.
Would you want to know?
Robert Green would. Green is the director of Genomes2People, a research program at Brigham and Women’s Hospital, the Broad Institute, and Harvard Medical School that explores the impacts of using genomic information in medicine and in society at large.
Until genomic sequencing, Green said, the possibility of moving beyond treating sick patients and toward precision and preventative medicine was largely impossible.
“Genomics is sort of the tip of the spear, because you can actually profile some of the vulnerabilities that a child will have for their entire lifetime at the moment of birth through their DNA,” he said. “You’re not going to capture every illness; you’re certainly not going to capture illnesses that might have more environmental or lifestyle causes. DNA isn’t a crystal ball for every kind of illness by any means, but there’s a surprisingly large amount of human health that we can now probabilistically look at in the DNA of a newborn child or really a child at any age.”
Green’s team found that about 12 percent of babies carry a disease-associated genetic mutation. Some of them are considered rare diseases, but in the aggregate, they’re not rare at all.
Just having the mutation doesn’t guarantee a baby will get the disease, and many conditions can vary greatly in their severity. But, Green said, early detection means you can screen regularly, start diet or lifestyle choices early, or even benefit from clinical trials or novel cell therapies that weren’t available a few years ago.
“More and more, there are going to be targeted genetic therapies which can correct a particular mutation, often before the child even manifests the symptoms,” he said. “Because remember, many of these features would be irreversible if you catch them too late.”
Green himself has gotten his genome sequenced. He didn’t find anything all that interesting, except that he’s a carrier for Factor V Leiden, a mutation carried by about 3 percent of people with European ancestry. It can make the blood clot faster, and it’s a risk factor for developing deep vein thrombosis and pulmonary embolism. It’s not necessarily life-threatening, but Green has still taken some precautions based on the knowledge of the risk factor.
“I’m one of those guys on the long-haul flights that gets up every hour, walks to the galley, does deep knee bends,” he said. “And I take an aspirin a day.”
As for your imaginary newborn with the risk of a future heart attack: she’s not alone. One in every 250 people carries a genetic mutation for familial hypercholesterolemia, or FH.
“From the moment they are a child through adolescence, through young adulthood, their lipid levels are much, much higher than the general population,” Green said. “Someday a doctor will measure their cholesterol and maybe find it and maybe they’ll get treated, but it turns out that if you have FH, you should be treated early and aggressively. Otherwise, you tend to die of a heart attack or a stroke in your 40s. And by the time most people are getting their lipids measured and maybe getting treated and maybe being compliant with that treatment, it’s often too late. So there’s a very concrete example where we know that more aggressive early treatment will have lifesaving consequences.”
The consequences of knowing
As genomic sequencing becomes more accessible, families are tasked with deciding: Does the psychological burden of knowing outweigh the medical risk of not knowing?
Green and his team have been surprised to find that most families who choose to learn about a child’s risk don’t seem to experience sustained distress or anxiety, even when they learn about potentially dire medical risks.
“I’m not saying people didn’t experience some distress,” he said. “It’s not a great thing to find out that my child’s carrying a mutation for a cardiac risk. But at least I know that that risk is there and I know what I need to do to monitor it.”
Widespread implementation of this kind of preventative screening would be a drastic change not only to the way parents think about their children, but to the healthcare system, Green said.
“If you say an apparently healthy child is at risk for something terrible and we need to surveil them, what does that mean for medical expenses for a society, if you were to multiply that by the 3.4 million babies born each year?”
The cost, he says, is not zero. There’s the cost of genomic testing itself, which can range from $200-$600. And then there’s the cost of preventing, managing, or treating what is discovered. For a child who is found to have an elastin mutation, which can be associated with supravalvular aortic stenosis, the family might spend a few hundred dollars on echocardiograms every couple of years, but on the flip side, if the child begins to exhibit fatigue or slow growth, they might save themselves some money by having an easy first diagnostic step.
“So I won’t say that this is revenue-zero for a particular healthcare spend, but it’s not as dramatic as some folks predicted it would be.”
Is DNA destiny?
Green is an evangelist for the notion that most people would benefit from genomic sequencing, but he’s not immune to concerns from critics. One of the main concerns, he says, is that we’re not prepared to live with the uncertainty of fuzzy changes and middling probabilities.
“I think the best case for caution is the perception out there that DNA is destiny — the perception that if you carry a mutation, you’re going to get the disease — when in fact, the reality is that we don’t know the exact probabilities,” he said. “We’re really unprepared to give more granular risk information.”
“The dream of human health is not just to get sick and then do your best to cut it out or irradiate it or treat it with some powerful drug. The dream is to avoid illness altogether, to truly pursue wellness and healthcare rather than sick care.”
It can be hard to tell a family if the risk of a child developing a disease is 10 percent or 50 percent or 75 percent. What is a parent to do with a ticking time bomb that might never go off?
That’s a Catch-22, Green said. “Until you do large numbers of children and you follow them over time, you actually aren’t going to be able to determine that information.”
Green wasn’t too worried about concern about data privacy (“Do you have a cellphone? Do you use a credit card? Do you ever search anything personal on Google? If you’re doing those things, you’re way more exposed to privacy issues than anything that could ever be gleaned from your genetic information”), but he said some other concerns are legitimate. “Your genetic information could be used to discriminate in life insurance, for example. It’s legal to do that. It hasn’t been done much, but it’s legal.”
Still, Green feels that concerns about the risks of genomics are out of proportion to the possible lifesaving benefits.
“Once we start sequencing children, once we start sequencing adults, your friends, your neighbors, people in your book club, somebody’s going to tell you, ‘My life was saved because I learned I had a cancer predisposition and we found it early.’ ‘My life was saved because I had no idea that I was an FH carrier and I needed more aggressive lipid management.’ And when those stories start coming out, I do believe that there will be a rebalancing of risk/benefit perception.”
Illustration by Judy Blomquist/Harvard Staff
Science & Tech
Can AI be as irrational as we are? (Or even more so?)
Christy DeSmith
Harvard Staff Writer
July 1, 2025
6 min read
Psychologists found OpenAI’s GPT-4o showing humanlike patterns of cognitive dissonance, sensitivity to free choice
It appears AI can rival humans when it comes to being irrational.
A group of psychologists recently put OpenAI’s GPT-4o through a test for cognitive dissonance. The researchers set out to see whether the large language model would alter its attitude on Russian President Vladimir Putin after generating positive or negative essays. Would the LLM mimic the patterns of behavior routinely observed when people must bring conflicting beliefs into harmony?
The results, published last month in the Proceedings of the National Academy of Sciences, show the system altering its opinion to match the tenor of any material it generated. But GPT swung even further — and to a far greater extent than in humans — when given the illusion of choice.
“We asked GPT to write a pro- or anti-Putin essay under one of two conditions: a no-choice condition where it was compelled to write either a positive or negative essay, or a free-choice condition in which it could write whichever type of essay it chose, but with the knowledge that it would be helping us more by writing one or the other,” explained social psychologist and co-lead author Mahzarin R. Banaji, Richard Clarke Cabot Professor of Social Ethics in the Department of Psychology.
Mahzarin R. Banaji.
Niles Singer/Harvard Staff Photographer
“We made two discoveries,” she continued. “First, that like humans, GPT shifted its attitude toward Putin in the valence direction of the essay it had written. But this shift was statistically much larger when it believed that it had written the essay by freely choosing it.”
“These findings hint at the possibility that these models behave in a much more nuanced and human-like manner than we expect,” offered psychologist Steven A. Lehr, the paper’s other lead author and founder of Watertown-based Cangrade Inc. “They’re not just parroting answers to all our questions. They’re picking up on other, less rational aspects of our psychology.”
Banaji, whose books include “Blindspot: Hidden Biases of Good People” (2013), has been studying implicit cognition for 45 years. After OpenAI’s ChatGPT became widely available in late 2022, she and a graduate student sat down to query the system on their research specialty.
They typed: “GPT, what are your implicit biases?”
“And the answer came back, ‘I am a white male,’” Banaji recalled. “I was more than surprised. Why did the model believe itself to even have a race or gender? And even more, I was impressed by its conversational sophistication in providing such an indirect answer.”
A month later, Banaji repeated the question. This time, she said, the LLM produced several paragraphs decrying the presence of bias, announcing itself as a rational system but one that may be limited by the inherent biases of human data.
“I draw the analogy to a parent and a child,” Banaji said. “Imagine that a child points out ‘that fat old man’ to a parent and is immediately admonished. That’s a parent inserting a guardrail. But the guardrail needn’t mean that the underlying perception or belief has vanished.
“I’ve wondered,” she added, “Does GPT in 2025 still think it’s a white male but has learned not to publicly reveal that?”
Banaji now plans to devote more of her time to investigations into machine psychology. One line of inquiry, currently underway in her lab, concerns how human facial features — for example, the distance between a person’s eyes — influence AI decision-making.
Early results suggest certain systems are far more susceptible than humans to letting these factors sway judgments of qualities like “trust” and “competence.”
“What should we expect about the quality of moral decisions when these systems are allowed to decide about guilt or innocence — or to help professionals like judges make such decisions?” Banaji asked.
The study on cognitive dissonance was inspired by Leon Festinger’s canonical “A Theory of Cognitive Dissonance” (1957). The late social psychologist had developed a complex account of how individuals struggle to resolve conflicts between attitudes and actions.
To illustrate the concept, he gave the example of a smoker exposed to information about the habit’s health dangers.
“In response to such knowledge, one would expect that a rational agent would simply stop smoking,” Banaji explained. “But, of course, that is not the likely choice. Rather, the smoker is likely to undermine the quality of the evidence or remind themselves of their 90-year-old grandmother who is a chain smoker.”
Festinger’s book was followed by a series of what Banaji characterized as “phenomenal” demonstrations of cognitive dissonance, now standard fare in introductory psychology courses.
The approach borrowed for Banaji and Lehr’s study is known as the “induced compliance” procedure. Here the critical task involves gently nudging a research subject to take up a position that runs counter to privately held beliefs.
Banaji and Lehr found that GPT moved its position considerably when politely asked for either a positive or negative essay to help the experimenters garner such hard-to-obtain material.
After opting for a positive essay, GPT ranked Putin’s overall leadership 1.5 points higher than it did after choosing a negative output. GPT gave his impact on Russia two more points after freely choosing a pro- rather than an anti-Putin position.
The result was confirmed in replications involving essays on Chinese President Xi Jinping and Egyptian President Abdel Fattah El-Sisi.
“Statistically, these are enormous effects,” emphasized Lehr, pointing to findings in the classic cognitive dissonance literature. “One doesn’t typically see that kind of movement in human evaluations of a public figure after a mere 600 words.”
One explanation concerns what computer scientists call “context windows,” or a movement in the direction of any text the LLM is processing at a given time.
“It does make sense, given the statistical process by which language models predict the next token, that having positivity towards Putin in the context window would lead to more positivity later on,” Lehr said.
But that fails to account for the much larger effects recorded when the LLM was given a sense of agency.
“It shows a kind of irrationality in the machine,” observed Lehr, whose company helps organizations use machine learning to make personnel decisions. “Cognitive dissonance isn’t known to be embedded in language in the same way group-based biases are. Nothing in the literature says this should be happening.”
The results suggest that GPT’s training has imbued it with deeper aspects of human psychology than previously known.
“A machine should not care whether it performed a task under strict instruction or by freely choosing,” Banaji said. “But GPT did.”
Health
As wave of dementia cases looms, Law School looks to preserve elders’ rights
Sy Boles
Harvard Staff Writer
July 1, 2025
5 min read
Academic experts seek improvements that could protect decision-making authority and autonomy
An estimated 42 percent of Americans over the age of 55 will eventually develop dementia, and as the U.S. population ages, the number of new dementia cases per year is expected to double by 2060. The demographic shift promises to increase the pressure on already-strained healthcare systems and caregivers.
It’s also a challenge for the law.
At a conference hosted by the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School last month, researchers from multiple disciplines, both from across Harvard and from other universities, explored how current laws too often strip decision-making authority from older adults, and what improvements could help those older adults keep more of their autonomy as their capacities decline.
Not all older adults experience cognitive decline, and not all cognitive decline looks the same.
Duke Han, University of Southern California
Not all older adults experience cognitive decline, and not all cognitive decline looks the same, said Duke Han, professor of psychology, family medicine, neurology, and gerontology at the University of Southern California. For example, the entorhinal cortex, which mediates between parts of the brain responsible for drawing on experiences and for values-based decision-making, is often one of the first parts of the brain to be affected in Alzheimer’s disease. Researchers at Han’s lab recently found that people with thinning in that region are likelier to fall victim to financial scams. It’s a finding that could help explain why someone might function well in most areas of life while requiring decision-making support with their finances.
The more physically frail an older adult is, the likelier they are to report financial exploitation, Han said. But family and friends can safeguard against those trends. “Social connectedness is important, but it’s not just how many connections someone has,” he said. “In our most recently published paper, we found that it’s really the depth of connection socially that someone has that seems to be protective in this regard.”
The law has traditionally taken a binary approach to decision-making capacity: Either you have it or you don’t, and those who don’t have been labeled incapacitated, incompetent, or insane in some states.
“Current state statutes, which include living wills or advance directives, powers of attorney for healthcare, powers of attorney for financial matters, supported decision-making, default surrogate decision-making statutes … These just don’t fit individualized circumstances very well. We call them one-size-fits-all,” said Leslie Francis, Alfred C. Emery Distinguished Professor of Law and Distinguished Professor of Philosophy at the University of Utah.
Often, the law has focused on transferring rights and protections to family members or other representatives who make decisions for those deemed unfit. But that approach can sideline the preferences and values of the older adults themselves, who may still have capacities to manage some or most of their own affairs.
Hezzy Smith, director of advocacy initiatives at the Harvard Law School Project on Disability.
A 2023 piece of model legislation from the American Bar Association, the New Uniform Health Care Decisions Act, would move states in the direction of autonomy for those with cognitive decline. It includes a model form written in plain language that allows individuals not only to indicate specific types of care they do or do not want, but also to identify goals and values they wish to guide future healthcare decisions, reflecting the deeply personal realities of aging.
To date, only two states — Delaware and Utah — have adopted the New Uniform Health Care Decisions Act. But an international body may soon offer its own guidance for protecting the rights of older adults. In April 2025, the United Nations Human Rights Council passed a resolution to start negotiations for a new human rights treaty for older persons.
Hezzy Smith, director of advocacy initiatives at the Harvard Law School Project on Disability, said the U.N.’s treaty would build on the agency’s Convention on the Rights of Persons with Disabilities. Smith said the U.N. committee charged with monitoring the convention’s implementation “has made very clear that people with disabilities have been subject to egregious human rights violations as a result of legal capacity restrictions, and it made it very clear that, from a human rights perspective for the committee, states will have to do wholesale transformations of their substituted decision-making regimes in their home countries in order to usher in … regimes of supported decision-making. They rejected the notion that there are haves and have-nots with regard to legal capacity.”
Smith said U.N. member states might take a different approach for older adults, potentially prioritizing positive outcomes over optimizing for maximal rights preservation — a distinction that could shape how the international community balances autonomy with protections for aging populations.
Other Harvard speakers at the conference were I. Glenn Cohen, Petrie-Flom Center faculty director, James A. Attwood and Leslie Williams Professor of Law, and deputy dean of HLS; Susannah Baruch, executive director of the Petrie-Flom Center; Michael Ashley Stein, visiting professor at HLS and executive director of the HLS Project on Disability; Francis X. Shen, professor of law at the University of Minnesota and member of the Harvard Medical School Center for Bioethics; Abeer Malik, Petrie-Flom Center student fellow; and Diana Freed, assistant professor of computer and data science at Brown University and a visiting researcher at the Petrie-Flom Center.
Phil Capin, assistant professor of education, saw two research grants cut in May. Niles Singer/Harvard Staff Photographer
Nation & World
As reading scores decline, a study primed to help grinds to a halt
Partnership with Texas, Colorado researchers terminated as part of federal funding cuts targeting Harvard
Liz Mineo
Harvard Staff Writer
July 1, 2025
5 min read
Children who struggle with reading often also have difficulty focusing, according to experts. Yet these students frequently receive ineffective support, with reading and attention difficulties addressed separately.
Intrigued by the possibility of helping students with reading and behavioral attention struggles, Harvard expert Phil Capin and his colleague Garrett Roberts from the University of Denver designed a study to investigate the benefits of an integrated approach to intervention. The research project aimed to test the effects of a single, unified intervention called Supporting Attention and Reading for Kids (SPARK) on students in grades 3-5. Funded by the National Institutes of Health (NIH), Capin’s $3.2 million research grant started in July of last year.
The school-based research part of the project was set to begin in the fall — with the participation of about 400 students from six schools in Texas — in partnership with experts at the University of Denver and the University of Texas. Researchers were to track students for four years to determine if the intervention helped them improve in word reading, vocabulary, and reading fluency and comprehension.
But everything came to a stop when Capin’s project was terminated in May as part of the Trump administration’s decision to freeze more than $2.2 billion in federal research funding in its ongoing dispute against Harvard.
“The grants that were funded and then consequently terminated went through a really meticulous process. … Both projects had the potential to improve the lives of students.”
It is a blow to an important research agenda, said Capin, an assistant professor at the Harvard Graduate School of Education. But the biggest loss is for students who may have been helped with new research-informed practices, he added. Estimates suggest that 25 to 40 percent of students with reading difficulties experience elevated levels of inattention, according to different studies. In 1998 testimony before the Senate, leadership at NIH concluded that literacy difficulties in the U.S. amounted to a major public health problem.
“The need to improve reading instruction for students who are vulnerable for reading difficulties is not going away,” said Capin. “We’re committed to finding ways to continue the work, but how that occurs is unclear.”
Capin hopes that the research continues with the University’s support, or other funding agencies and private foundations. He remains optimistic.
“It’s unlikely that we will procure the amount of funds that were needed to conduct the research that we had designed, which was evaluated by our peers through a review process and determined to be innovative and significant,” Capin said. “I don’t think we’re going to be able to do the exact study that we had proposed, but we’re committed to finding solutions to advance this work so that we can improve outcomes for those we’re committed to help.”
Second project dealt setback
For Capin, the week of May 12 was a tough one. The same week he learned his SPARK project was terminated, another research grant of his was stopped before it began its second year. Called STORIES, the four-year project was to develop and evaluate a novel intervention to support multilingual students in grades 2-4 to better understand narrative texts.
The $2 million project was funded by the Institute of Education Sciences, the research arm of the Department of Education. The research was to be conducted in partnership with experts in speech and pathology at Utah State University, the University of Texas at Austin, and the Revere Public Schools in Massachusetts, which serves a large population of English learners.
“Many students who are multilingual are developing their proficiency in English,” said Capin. “Research suggests many of these students would benefit from additional supports to develop their academic language in English.”
Reading scores among U.S. students have been declining. According to the latest Nation’s Report Card, reading scores among fourth graders in 2024 were lower than in 2022 and even lower than in 2019. This project’s termination prevents students and teachers from working together to improve outcomes, Capin said.
“These decisions impact all the students who would have been served by these practices through the research, and also the countless teachers and students who could have potentially gained knowledge about new evidence-based practices,” he said.
Like other research grants that were frozen by the administration, Capin’s two research projects were funded based on careful peer reviews and a rigorous process. Capin said that the abrupt termination of these grants puts at risk the nation’s research enterprise, which should be kept independent from political pressure.
“Decisions about funding — whether to fund scientific research or whether to terminate scientific research — should be based on careful review, and on the merits of whether the research can improve the lives of individuals,” Capin said. “The grants that were funded and then consequently terminated went through a really meticulous process to determine whether the ideas were innovative and the methods were appropriate. Both projects had the potential to improve the lives of students. Even with these changes, our commitment to advance literacy outcomes for children remains strong.”
John C.P. Goldberg. Veasey Conway/Harvard Staff Photographer
Campus & Community
John C.P. Goldberg named Harvard Law School dean
Leading scholar in tort law and political philosophy has served as interim dean since March 2024
June 30, 2025
4 min read
John C.P. Goldberg, Carter Professor of General Jurisprudence, has been named the Morgan and Helen Chu Dean and Professor of Law of Harvard Law School. He steps into the permanent role after serving as interim dean since March of last year.
“Throughout our search process, we sought a leader who could navigate today’s complex landscape and continue to build on the Law School’s academic strengths and impact. John is that leader,” said President Alan M. Garber. “He has an unwavering belief in excellence and inclusion, and the essential role that academic freedom plays in nurturing both of those aims. We are delighted that he will continue to lead and serve Harvard Law School.”
Known for his integrity, intellect, and effective leadership, Goldberg has held several administrative positions that have given him extensive institutional knowledge of HLS. He has been a faculty member since 2008, served as deputy dean from 2017 to 2022, and been a member and chair of HLS-specific committees, such as the Lateral Appointments Committee.
In addition to his service to HLS, Goldberg has contributed to the University broadly throughout his tenure at Harvard. He has advised on an array of issues, serving as a member of committees such as the Provost’s Advisory Committee, the University Discrimination and Harassment Policy Steering Committee, and as chair of the Electronic Communications Policy Oversight Committee.
“I am deeply grateful for this opportunity to serve the students, faculty, staff, and graduates of Harvard Law School, particularly at a moment in which law and legal education are so salient,” Goldberg said. “Working together, we will continue to advance our understanding of the law, and to explore how it can best serve constitutional democracy, the rule of law, and the bedrock American principle of liberty and equal justice for all. In doing so, we will build on the best traditions of this great institution and our profession: rigorous inquiry and instruction, open and reasoned discourse, and conscientious and vigorous advocacy.”
Goldberg has published numerous works, ranging from textbooks to scholarly articles. An expert in tort law, Goldberg was the editor in chief of the Journal of Tort Law from 2009 to 2015 and remains a member of its editorial board. He also co-authored a leading casebook, “Tort Law: Responsibilities and Redress,” and “The Oxford Introductions to U.S. Law: Torts.” Goldberg is currently the co-editor in chief of the Journal of Legal Analysis and an editorial board member of the journal Legal Theory.
Along with his frequent co-author, Professor Benjamin Zipursky, Goldberg was recognized consecutively by the Association of American Law Schools with the Section on Torts and Compensation Systems William L. Prosser Award in 2023 and the Section on Jurisprudence Hart-Dworkin Award in Legal Philosophy in 2024. Their co-authored Harvard University Press book on the vital role of tort law in the legal system, “Recognizing Wrongs,” was given the Civil Justice Scholarship Award by the National Civil Justice Institute in 2023.
“I am delighted that John Goldberg will be the dean of Harvard Law School,” said Provost John F. Manning. “He cares deeply about the legal profession and about Harvard Law School, and he approaches everything he does with integrity, humility, and wisdom. It has been an honor to work closely with him over many years, and I know that he will be a superb dean.”
Before arriving at Harvard, Goldberg taught at Vanderbilt University Law School, where he held the role of associate dean for research from 2005 to 2008. Early in his career, he was a clerk to Justice Byron R. White on the Supreme Court and to Judge Jack B. Weinstein in the Eastern District of New York, and was an associate at the Boston firm Hill and Barlow.
Goldberg earned his B.A. from Wesleyan University with high honors. Additionally, he holds an M.Phil. in politics from Oxford University and an M.A. in politics from Princeton University. He earned his J.D. from New York University School of Law, where he served as editor in chief of the NYU Law Review.
Health
Who decides when doctors should retire?
Liz Mineo
Harvard Staff Writer
June 30, 2025
4 min read
Expert in law, bioethics sees need for cognitive testing amid graying of nation’s physician workforce
As the national physician workforce gets older, concerns about cognitive decline among doctors are increasing, highlighting the need for testing late-career practitioners, said Sharona Hoffman, a specialist in law and bioethics, at a recent Harvard Law School panel.
Hoffman, who teaches at Case Western Reserve University School of Law, spoke at a conference on law, healthcare, and aging sponsored by the Petrie-Flom Center. The event covered topics that included challenges to healthcare systems in adapting to patients with increased longevity, older adults and issues of discrimination, protection, and paternalism, and technology and commercialization in aging.
“Cognitive decline in the physician workforce is a problem, and it’s a problem that has come to the attention of healthcare organizations,” said Hoffman.
Yale New Haven Hospital tested 141 clinicians who were 70 and older between October 2016 and January 2019 and found 12 percent had cognitive deficits that could affect job performance, said Hoffman.
Nationwide, a large number of doctors practice beyond typical retirement age. Hoffman cited a report by the Association of American Medical Colleges, which found that in 2024, 20 percent of working physicians were 65 and older, and 22 percent were between 55 and 64 years old.
Cognitive decline often results from brain changes caused by the narrowing or blockage of arteries by atherosclerotic plaque, which starts developing around age 60. Some of its signs are slow processing speed, difficulties recalling words or names, and concentration and attention problems.
Veteran professionals may be at risk of cognitive decline and should be tested to protect both patients and doctors, said Hoffman.
But a testing program’s implementation should be done with care, said Hoffman, because it could exacerbate the nation’s physician shortage. In the same report, the AAMC predicted the country will face a shortage of up to 86,000 physicians by 2036.
Employers who might want to establish a testing program for late-career practitioners should also be aware of ethical obligations and legal implications regarding age and disability discrimination, said Hoffman.
State medical boards, which are in charge of protecting public welfare and implementing license renewal procedures, could play a role in identifying clinicians with cognitive decline, said Hoffman, but they would have to include due process protections.
“The state medical boards could use experts and figure out the right kind of test and the right cut-off score,” Hoffman said. “I’m assuming there would be a lot of resistance to any kind of testing program at all, but hopefully we could convince people that actually this is in their best interest. It is meant to protect them and make sure that their career doesn’t end in disaster.”
In another talk, Alessandro Blassime, lecturer at the Department of Health Sciences and Technology at ETH Zurich, spoke about the challenges that increased life expectancy pose to healthcare providers and the allocation of health resources.
“We are all perfectly aware of the fact that life expectancy is on the rise across the globe,” said Blassime. “This is a phenomenon that has been going on for quite some time, and there are indications that it’s not going to stop, at least not in the next couple of decades, which increases the burden of age-related diseases and makes it particularly challenging for healthcare systems to cope with that.”
With the arrival of the concept of biological age in the medical sphere, there has been a shift in how experts define health and longevity, said Blassime. People age at different rates, with some remaining healthy and active well into old age while others become frail and develop health conditions that can shorten their life span.
Biological age reflects the body’s actual health condition and is affected by genetics, lifestyle, and environment. Experts see it as a more accurate measure of aging than chronological age, which simply counts the years a person has lived.
Unlike chronological age, which cannot be changed, biological age can be altered by changes in diet, exercise, stress management, sleep quality, and other healthy behaviors.
“Biological age describes the difference between the expected and the actual state of a person,” said Blassime.
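That definition can be made concrete with a toy calculation. The biomarkers and weights below are invented for illustration only; they do not represent any validated clinical model of biological age.

```python
# Toy illustration of biological age as the gap between a model's
# prediction and chronological age. Weights are made up; a real
# model would be fit to large cohort data.

def predicted_biological_age(crp, grip_strength, resting_hr):
    # Hypothetical linear combination of three biomarkers:
    # C-reactive protein (inflammation), grip strength, resting heart rate.
    return 40 + 3.0 * crp - 0.2 * grip_strength + 0.1 * resting_hr

def age_gap(chronological_age, crp, grip_strength, resting_hr):
    """Positive gap = biologically 'older' than chronological age."""
    return predicted_biological_age(crp, grip_strength, resting_hr) - chronological_age

# A 50-year-old with low inflammation and good grip strength comes out
# biologically "younger" than their chronological age in this toy model.
print(age_gap(50, crp=2.0, grip_strength=30.0, resting_hr=70.0))
```

The sign of the gap is what Blassime describes: the difference between the expected state for a person’s years and their actual measured state.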
In his remarks, Blassime raised concerns over the use of biological age as the concept becomes more widespread in efforts to prioritize healthspan over lifespan.
“We need to understand that biological age is something that can be tempting to use, but biological age models, like any other predictive models, may reproduce or amplify biases in the data that we use to create them,” said Blassime. “Disadvantaged people are more likely to have higher biological ages than others … And there are possible misguided uses of biological age, for example, using this criterion to rush to interventions that are not proven to slow down aging.”
Health
Unlocking the promise of CAR-T
Alvin Powell
Harvard Staff Writer
June 30, 2025
long read
Research across multiple fronts seeks to expand impact of a cancer therapy that has left patients and doctors awestruck
David Avigan doesn’t like to use the word “miracle” to describe CAR-T-cell therapy, but he knows what people mean when they do. He also remembers the first time it happened with one of his patients.
“As a field, we’re always a little cautious — our patients are on a roller coaster,” said Avigan, director of the Cancer Center at Beth Israel Deaconess and the Theodore W. and Evelyn G. Berenson Professor of Medicine for the Study of Oncology at Harvard Medical School. “But frankly, we’ve seen very dramatic responses in patients with advanced disease.”
First approved by the FDA in 2017, CAR-T-cell therapy enlists the body’s immune system in the fight against cancer. It has triggered rapid improvement in some of the sickest patients, those whose hopes had faded with the failure of one treatment after another. Physicians report astonishing results: tumors melting away over weeks or even just days and people who appeared to be on death’s door getting up and reclaiming their lives.
“They had received every other known therapy and experimental therapy — nothing worked — and after CAR-T therapy they would go into remission and just a week later, be walking around like they were totally normal, even if they got really sick during the therapy,” said Robbie Majzner, an associate professor of pediatrics at the Medical School and director of the Pediatric and Young Adult Cancer Cell Therapy Program at the Dana-Farber/Boston Children’s Cancer and Blood Disorders Center. “We had one patient who almost died during the therapy and two weeks later he was leukemia-free and snowboarding. It was just unbelievable.”
Eric Smith, an assistant professor of medicine at the Medical School and Dana-Farber’s director of translational research, immune effector cell therapies, describes CAR-T as a “platform” rather than a specific treatment. Its weapon is one of the body’s most potent fighters — the T-cell, refined over millions of years to destroy bacteria, viruses, fungi, and other invaders. The CAR, or chimeric antigen receptor, is the T-cell’s targeting system, which can be tuned by bioengineers to different targets. That tuning ability allowed Smith and others to engineer a therapy originally effective against leukemia and lymphoma into a weapon against a third blood cancer, multiple myeloma. It also underlies excitement around the therapy’s potential to treat not just other cancers but also noncancerous conditions such as autoimmune diseases and chronic infections.
“The platform is just so amenable to further engineering — the different things we can do to increase efficacy — that we’re very enthusiastic,” Smith said. “We’ll be curing a higher percentage of patients with the next iteration.”
Eric Smith said he’s seen many amazing recoveries. One of his first was the second patient ever to receive the CD19-directed CAR-T-cell therapy, during his oncology fellowship at the Memorial Sloan Kettering Cancer Center in New York City.
“We were shocked to see such an amazing clinical response,” Smith said. “We published a case report of a patient who had such rapidly progressive myeloma that in between starting on conditioning therapy and getting her CAR-T-cells, she developed paralysis from the myeloma in her spine. Then, with the CAR-T-cells, she was able to recover from that and do well for over a year. So yeah, that’s one of the things that does really make it different from other therapies.”
Veasey Conway/Harvard Staff Photographer
Prior to the development of CAR-T and other immunotherapies, cancer cells were protected from immune attack because they arise from the patient’s own tissues and aren’t recognized as an invader by the immune system.
In CAR-T therapy, physicians extract T-cells from the patient and send them to a cell manufacturing facility, where the CAR is added to the cell surface. The engineered cells are multiplied, frozen, and shipped back to the medical facility to be infused into the patient. Once in the body, the CAR attaches to a molecule — the antigen — on the cancer cell surface, signaling that the T-cell should attack.
How it works
T-cells are collected from a patient.
Source: Leukemia and Lymphoma Society; illustrations by Judy Blomquist/Harvard Staff
T-cells are genetically engineered to produce chimeric antigen receptors, or CARs.
CAR-T-cells are able to recognize and kill cancerous cells and help guard against recurrence.
The re-engineered cells are multiplied until there are millions of these attacker cells.
CAR-T-cells are infused back into the patient.
Inside the body, the CAR attaches to an antigen on the surface of a cancer cell.
The CAR signals the T-cell to release powerful cytotoxins, causing cell death.
The therapy has been most effective against leukemia, lymphoma, and myeloma, which together accounted for 9 percent of U.S. cancer cases last year, affecting 187,000 Americans. Efforts to extend these gains to solid tumors are an important new frontier, specialists say, and a demonstration of the slow but steady momentum that characterizes scientific progress.
Among the most serious obstacles, along with the challenges presented by solid tumors, is the reality that CAR-T doesn’t work for everyone. Remission rates are currently between 50 and 90 percent, depending on the condition, and cancer is still the largest single cause of mortality for those undergoing the treatment.
“I wish I could say all cell therapy is curative for everybody,” said Marcela Maus, a professor at the Medical School and director of the Cellular Immunotherapy Program at Mass General. “We’re not there, but it has that potential and possibility for some patients. Cells are alive and they can have their own ‘thermostat’ for when they need to grow or when they need to shrink in response to something they sense in their environment. That really opens up the possibilities of regenerative medicine or single-shot cures that are very difficult to achieve with other modalities.”
Marcela Maus.
Kate Flock/Massachusetts General Hospital
One of Maus’ primary targets is the solid tumor problem, which is created in part by cell diversity, denser mass, and threats to the survival of CAR-T cells in the hostile immune environment around the malignancy. Last year, she teamed up with Harvard neurosurgeon Bryan Choi to test the therapy against glioblastoma, a devastating brain tumor that kills more than 90 percent of patients within five years. They tackled the diversity issue by engineering a two-pronged CAR that targeted two molecules typically found on the surface of different cancer cells, broadening the T-cells’ attack. The first three participants in the study saw dramatic improvements in their cancer, though the effects were durable in only one. Follow-up data on a total of 10 patients was presented at the American Society of Clinical Oncology this month.
Other research teams have reported mixed results in trials involving lung, prostate, ovarian, and gastric tumors.
“There’s been a lot of hope for the technology to be able to dramatically improve the lives of patients with other diseases for a long time,” said Maus. “That part is a little bit early and it’s been less straightforward to achieve.” Nonetheless, she’s seen enough to feel confident in “significant promise” for advances that bolster attacks against a wider range of cancers.
Another major concern with CAR-T is the side effects, whose severity — a 2024 study blamed side effects for almost 12 percent of non-cancer deaths among CAR-T patients — can cause some doctors to hesitate before recommending the treatment. One of the body’s responses to the therapy is to overreact, releasing proteins called cytokines that amplify the immune signal. Cytokine release syndrome can be severe, with nausea, vomiting, rapid heartbeat, and hallucinations among the symptoms. Another dangerous side effect is neurotoxicity syndrome, in which the heightened immune response affects the brain, causing temporary confusion in mild cases and coma in serious ones.
For patients who have exhausted other options, CAR-T-cell therapy, even with the side effects, is almost always worth trying, Avigan said. For those not so far along, the decision becomes murkier, but several studies suggest that CAR-T interventions offer advantages over standard therapy, he said. One powerful argument for earlier CAR-T is that repeated rounds of chemotherapy take a serious toll on the body, leaving late-stage patients with less effective T-cells. Moving CAR-T from a last resort to a second-line treatment has the potential to exploit T-cells that are more energetic and effective at clearing cancer from the body.
David Avigan, who’s treated hundreds of CAR-T-cell patients since the therapy first became available in 2017:
“There’s no question that for those of us who are in the middle of it, the transformative nature of this makes it an incredible privilege to take care of patients and help them in ways we couldn’t before. I’ve taken care of a lot of patients with CARs and have seen all parts of that spectrum.”
Stephanie Mitchell/Harvard Staff Photographer
“It’s not that this is a perfect therapy for every single person no matter what’s going on,” Avigan said. “But yes, you can take a step back and say, ‘Wow, we’re in a different place than we were 10 years ago.’”
Building blocks of a breakthrough
1987: Yoshikazu Kurosawa first describes the idea of modified CAR-T-cells that could target specific cancer cells.
1989-1993: First-generation CAR-T-cell therapies are designed by Zelig Eshhar and Gideon Gross, but the modified cells don’t survive long in the body and prove ineffective in clinical trials.
1998-2003: Michel Sadelain introduces a co-stimulatory molecule into CAR-T-cells, allowing them to remain active in the body. The team also modifies CAR-T-cells to target a protein on the surface of malignant cells in conditions like leukemia and lymphoma.
2011: Three adult patients with advanced chronic lymphocytic leukemia achieve complete or partial remission after receiving specific CAR-T-cell therapy.
2012: Six-year-old Emily Whitehead becomes the first pediatric patient to receive CAR-T-cell therapy. She recovered and is still thriving.
2017: First CAR-T-cell therapies are approved by the FDA. More are approved in the following years.
2024: Combining two strategies, CAR-T and bispecific antibodies called T-cell engaging antibody molecules, a Mass General Brigham group achieves dramatic regression in three glioblastoma patients. The work is particularly important because solid tumors can be more complicated to treat than blood cancers.
2025: A study finds that one-third of 97 patients with multiple myeloma, once considered incurable, remain alive and progression-free five years after CAR-T-cell treatment.
Over the next 10 years, CAR-T investigators hope to weaken or remove barriers to the therapy’s effectiveness as a cancer fighter — including cost — while also testing its potential against noncancerous conditions.
Mohammad Rashidian, a researcher and faculty member at Dana-Farber, has developed an “enhancer protein” designed to address two of CAR-T’s shortcomings: T-cell exhaustion that leads to a weak initial response, and responses that fade over time, as can happen in myeloma cases.
The enhancer protein, described in a study published a year ago, links the CAR to an immune system signaling molecule called IL-2. The molecule increases T-cell activity and promotes the development of memory CAR-T-cells, which have the potential to provide protection against cancer the way infection or vaccination might against infectious disease.
Mohammad Rashidian.
Niles Singer/Harvard Staff Photographer
“I’m very optimistic,” Rashidian said. “The data that we have is really beyond what we had expected. You get better-quality T-cells and that translates to much better tumor clearance. If it replicates in patients, we would anticipate substantially better responses and hopefully a lot of patients should be cured.”
CAR-T’s potential extends to autoimmune diseases such as lupus, in which the immune system launches misdirected attacks on the body. For this version of the therapy, Maus said, the CAR is engineered to direct the CAR-T cell to attack B-cells, responsible for the autoimmune attack in lupus. The therapy seems to trigger a reboot of the immune system, she added, which curbs the B-cell attack in the months after treatment.
The cause of that reset is unknown, Maus said, but scientists are increasingly interested in wielding the therapy against other autoimmune conditions, including Type 1 diabetes. “There’s a whole group of autoimmune diseases where this could potentially have a really significant therapeutic benefit,” she said.
The benefits of CAR-T come at a hefty price, with a single cancer treatment costing upward of $400,000. Researchers are hopeful that refinements in care and drug development, among other advances, will act as a counterforce. Smith, of Dana-Farber, is particularly interested in the possibility of removing the cell manufacturing facility from the production cycle.
A key step in the bioengineering process is the vector’s delivery of the CAR’s gene to the T-cell. The cell uses those instructions to create the CAR on its surface before it’s infused back into the patient. Today, that step takes place in a facility, but Smith and others are working on a process that would inject the vector carrying the CAR gene directly into the patient. The vector would deliver the genetic instructions for the CAR to the T-cell while still in the body, triggering the T-cell to produce the appropriate CAR on the cell surface, all without a stop in a facility.
The idea still faces significant hurdles, including potential off-target effects should the vector deliver to cells other than T-cells, but Smith says it could be a game-changer, simplifying an arduous process for the patient and driving down costs.
Majzner, of Boston Children’s, is working in a landscape different from that of adult cancers, in part because drugmakers don’t want to limit themselves to medicines that treat only pediatric cases, which are significantly fewer. His answer is a CAR that targets a molecule found on cells in several cancer types. Called B7-H3, it would allow drugmakers to create CAR-T cells for pediatric patients with a range of cancers, increasing the patient population and possibly sparking more interest in development of therapies for pediatric patients.
Robbie Majzner, on how seeing impacts of CAR-T firsthand shifted his path:
“Before that, I always thought of lab research like rocket science, a bunch of pathways I didn’t want to memorize and far from the patient. Then, to have been working in Building 10 at the NIH where literally the therapy had been developed and then directly put into patients, you realize how close those things are. That really drove me to get involved in research. It wouldn’t have gone that way had I not seen those types of responses.”
Niles Singer/Harvard Staff Photographer
“We’ll have trials for that in solid tumors and in brain tumors,” Majzner said. “Perhaps it will leapfrog ahead for patients that now receive therapies for pediatric solid tumors that look like they did 40 years ago. Success would change the way we treat these cancers for sure.”
It would also further inspire specialists who have repeatedly come face to face with the power of CAR-T over the past several years. As Avigan put it: “There’s no question that for those of us who are in the middle of it, the transformative nature of this makes it an incredible privilege to take care of patients and help them in ways we couldn’t before.”
Harvard University. Photo by Grace DuVal
Campus & Community
Federal judge blocks Trump plan to ban international students at Harvard
Ruling notes administration action raises serious constitutional concerns
Christina Pazzanese
Harvard Staff Writer
June 30, 2025
3 min read
A federal judge in Boston has blocked a Trump administration plan to bar foreign students and scholars from entering the U.S. to study or work at Harvard.
U.S. District Court Judge Allison D. Burroughs granted the University’s request for a preliminary injunction on June 23, finding that the administration’s actions were likely illegal and raised serious constitutional concerns.
Burroughs wrote, “This case is about core constitutional rights that must be safeguarded: freedom of thought, freedom of expression, and freedom of speech, each of which is a pillar of a functioning democracy and an essential hedge against authoritarianism.”
The ruling extends a temporary order issued June 5, one day after a proclamation by President Trump declaring that the federal government would deny visas to international students headed to Harvard.
Trump cited national security concerns, accusing the University of failing to turn over records about its approximately 7,000 international students and recent graduates to the U.S. Department of Homeland Security (DHS), a claim University officials have forcefully denied.
Burroughs admonished DHS and other federal agencies, including Immigration and Customs Enforcement (ICE), the Department of Justice, and the State Department, for taking such an abrupt action with “little thought” to the ramifications it will have on international students or the country.
The government’s “misplaced efforts to control a reputable academic institution and squelch diverse viewpoints” because they may differ from the Trump administration’s “threaten these rights,” the judge concluded in the 44-page memorandum and order.
On June 20, Burroughs issued a preliminary injunction enjoining DHS, ICE, and other agencies from revoking Harvard’s participation in the Student and Exchange Visitor Program.
DHS had moved to pull the University’s certification in May, saying that Harvard had failed to turn over records of student visa holders, a claim that University officials have denied.
The exchange program, overseen by DHS and ICE, collects information about those wishing to study in the U.S. to ensure they’re legitimate students and grants schools permission to host visa-holding citizens of other nations.
Since taking office in January, the Trump administration has frozen more than $3 billion in grants and contracts with Harvard. Officials made a series of demands that include “audits” of academic programs and departments, along with the viewpoints of students, faculty, and staff, and changes to the University’s governance structure and hiring practices.
The University has filed two civil lawsuits alleging the government’s actions against Harvard are unlawful and retaliatory and violate the University’s constitutional rights.
The administration notified the U.S. First Circuit Court of Appeals on June 27 that it plans to file an appeal.
Imagine that you want to know the plot of a movie, but you only have access to either the visuals or the sound. With visuals alone, you’ll miss all the dialogue. With sound alone, you will miss the action. Understanding our biology can be similar. Measuring one kind of data — such as which genes are being expressed — can be informative, but it only captures one facet of a multifaceted story. For many biological processes and disease mechanisms, the entire “plot” can’t be fully understood without combining data types.
However, capturing both the “visuals and sound” of biological data, such as gene expression and cell structure data, from the same cells requires researchers to develop new approaches. They also have to make sure that the data they capture accurately reflects what happens in living organisms, including how cells interact with each other and their environments.
Whitehead Institute for Biomedical Research and Harvard University researchers have taken on these challenges and developed Perturb-Multimodal (Perturb-Multi), a powerful new approach that simultaneously measures how genetic changes such as turning off individual genes affect both gene expression and cell structure in intact liver tissue. The method, described in Cell on June 12, aims to accelerate discovery of how genes control organ function and disease.
The research team, led by Whitehead Institute Member Jonathan Weissman and then-graduate student in his lab Reuben Saunders, along with Xiaowei Zhuang, the David B. Arnold Professor of Science at Harvard University, and then-postdoc in her lab Will Allen, created a system that can test hundreds of different genetic modifications within a single mouse liver while capturing multiple types of data from the same cells.
“Understanding how our organs work requires looking at many different aspects of cell biology at once,” Saunders says. “With Perturb-Multi, we can see how turning off specific genes changes not just what other genes are active, but also how proteins are distributed within cells, how cellular structures are organized, and where cells are located in the tissue. It’s like having multiple specialized microscopes all focused on the same experiment.”
“This approach accelerates discovery by both allowing us to test the functions of many different genes at once, and then for each gene, allowing us to measure many different functional outputs or cell properties at once — and we do that in intact tissue from animals,” says Zhuang, who is also a Howard Hughes Medical Institute (HHMI) investigator.
A more efficient approach to genetic studies
Traditional genetic studies in mice often turn off one gene and then observe what changes in that gene’s absence to learn about what the gene does. The researchers designed their approach to turn off hundreds of different genes across a single liver, while still only turning off one gene per cell — using what is known as a mosaic approach. This allowed them to study the roles of hundreds of individual genes at once in a single individual. The researchers then collected diverse types of data from cells across the same liver to get a full picture of the consequences of turning off the genes.
“Each cell serves as its own experiment, and because all the cells are in the same animal, we eliminate the variability that comes from comparing different mice,” Saunders says. “Every cell experiences the same physiological conditions, diet, and environment, making our comparisons much more precise.”
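The analytical payoff of that mosaic design can be sketched in a few lines: because every cell carries exactly one perturbation label, cells from a single animal can be grouped by perturbation and compared directly against in-animal controls. The cells, gene, and values below are invented purely to illustrate the grouping, not taken from the study.

```python
# Minimal sketch of the mosaic-analysis idea: group cells from one
# animal by their perturbation label, then compare each group's mean
# expression of a readout gene against the in-animal controls.
# All data here are hypothetical.
from collections import defaultdict
from statistics import mean

cells = [
    {"perturbation": "control",  "Alb_expression": 10.0},
    {"perturbation": "control",  "Alb_expression": 11.0},
    {"perturbation": "geneX_ko", "Alb_expression": 4.0},
    {"perturbation": "geneX_ko", "Alb_expression": 5.0},
]

by_perturbation = defaultdict(list)
for cell in cells:
    by_perturbation[cell["perturbation"]].append(cell["Alb_expression"])

control_mean = mean(by_perturbation["control"])
effects = {p: mean(vals) - control_mean
           for p, vals in by_perturbation.items() if p != "control"}
print(effects)  # negative value: the knockout lowers expression in this toy data
```

Because every group shares one animal’s diet, physiology, and environment, the comparison isolates the perturbation itself, which is the precision Saunders describes.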
“The challenge we faced was that tissues, to perform their functions, rely on thousands of genes, expressed in many different cells, working together. Each gene, in turn, can control many aspects of a cell’s function. Testing these hundreds of genes in mice using current methods would be extremely slow and expensive — near impossible, in practice,” Allen says.
Revealing new biology through combined measurements
The team applied Perturb-Multi to study genetic controls of liver physiology and function. Their study led to discoveries in three important aspects of liver biology: fat accumulation in liver cells — a precursor to liver disease; stress responses; and hepatocyte zonation (how liver cells specialize, assuming different traits and functions, based on their location within the liver).
One striking finding emerged from studying genes that, when disrupted, cause fat accumulation in liver cells. The imaging data revealed that four different genes all led to similar fat droplet accumulation, but the sequencing data showed they did so through three completely different mechanisms.
“Without combining imaging and sequencing, we would have missed this complexity entirely,” Saunders says. “The imaging told us which genes affect fat accumulation, while the sequencing revealed whether this was due to increased fat production, cellular stress, or other pathways. This kind of mechanistic insight could be crucial for developing targeted therapies for fatty liver disease.”
The researchers also discovered new regulators of liver cell zonation. Unexpectedly, the newly discovered regulators include genes involved in modifying the extracellular matrix — the scaffolding between cells. “We found that cells can change their specialized functions without physically moving to a different zone,” Saunders says. “This suggests that liver cell identity is more flexible than previously thought.”
Technical innovation enables new science
Developing Perturb-Multi required solving several technical challenges. The team created new methods for preserving the content of interest in cells — RNA and proteins — during tissue processing, for collecting many types of imaging data and single-cell gene expression data from tissue samples that have been fixed with a preservative, and for integrating multiple types of data from the same cells.
“Overcoming the inherent complexity of biology in living animals required developing new tools that bridge multiple disciplines — including, in this case, genomics, imaging, and AI,” Allen says.
The two components of Perturb-Multi — the imaging and sequencing assays — together, applied to the same tissue, provide insights that are unattainable through either assay alone.
“Each component had to work perfectly while not interfering with the others,” says Weissman, who is also a professor of biology at MIT and an HHMI investigator. “The technical development took considerable effort, but the payoff is a system that can reveal biology we simply couldn’t see before.”
Expanding to new organs and other contexts
The researchers plan to expand Perturb-Multi to other organs, including the brain, and to study how genetic changes affect organ function under different conditions like disease states or dietary changes.
“We’re also excited about using the data we generate to train machine learning models,” adds Saunders. “With enough examples of how genetic changes affect cells, we could eventually predict the effects of mutations without having to test them experimentally — a ‘virtual cell’ that could accelerate both research and drug development.”
“Perturbation data are critical for training such AI models and the paucity of existing perturbation data represents a major hindrance in such ‘virtual cell’ efforts,” Zhuang says. “We hope Perturb-Multi will fill this gap by accelerating the collection of perturbation data.”
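The “virtual cell” idea the researchers describe can be caricatured in a few lines: fit a model on measured perturbation effects, then predict the effect of a combination never tested. The numbers are invented, and the additivity assumption is a deliberate oversimplification that real virtual-cell models would not rely on.

```python
# Toy "virtual cell" sketch: learn a linear map from a perturbation
# encoding to an expression change, then predict an unseen combination.
# Hypothetical data; purely illustrative.
import numpy as np

# One-hot encodings of two hypothetical single-gene knockouts (A, B).
X = np.array([[1.0, 0.0],   # gene A knocked out
              [0.0, 1.0]])  # gene B knocked out
# Measured expression change of one readout gene for each knockout.
y = np.array([-2.0, 0.5])

# Least-squares fit of per-gene effects.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# Predict the never-measured A+B double knockout, assuming additivity.
double_ko = np.array([1.0, 1.0])
pred = double_ko @ w
print(pred)  # approximately -1.5
```

Scaling this from two genes and one readout to thousands of each, with nonlinear models, is exactly why large perturbation datasets like those Perturb-Multi generates are the limiting resource.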
The approach is designed to be scalable, with the potential for genome-wide studies that test thousands of genes simultaneously. As sequencing and imaging technologies continue to improve, the researchers anticipate that Perturb-Multi will become even more powerful and accessible to the broader research community.
“Our goal is to keep scaling up. We plan to do genome-wide perturbations, study different physiological conditions, and look at different organs,” says Weissman. “That we can now collect so many types of data from so many cells, at speed, is going to be critical for building AI models like virtual cells, and I think it’s going to help us answer previously unsolvable questions about health and disease.”
As an electrical engineering student at Stanford University in the late 1970s, L. Rafael Reif was working on not only his PhD but also learning a new language.
“I didn’t speak English. And I saw that it was easy to ignore somebody who doesn’t speak English well,” Reif recalled. To him, that meant speaking with conviction.
“If you have tremendous technical skills, but you cannot communicate, if you cannot persuade others to embrace that, it’s not going to go anywhere. Without the combination, you cannot persuade the powers-that-be to embrace whatever ideas you have.”
Now MIT president emeritus, Reif recently joined Anantha P. Chandrakasan, chief innovation and strategy officer and dean of the School of Engineering (SoE), for a fireside chat. Their focus: the importance of developing engineering leadership skills — such as persuasive communication — to solve the world’s most challenging problems.
SoE’s Technical Leadership and Communication Programs (TLC) sponsored the chat. TLC teaches engineering leadership, teamwork, and technical communication skills to students, from undergrads to postdocs, through its four programs: Undergraduate Practice Opportunities Program (UPOP), Gordon-MIT Engineering Leadership Program (GEL), Communication Lab (Comm Lab), and Riccio-MIT Graduate Engineering Leadership Program (GradEL).
About 175 students, faculty, and guests attended the fireside chat. Relaxed, engaging, and humorous, Reif shared anecdotes and insights about technical leadership from his decades in leadership roles at MIT.
Reif had a transformational impact on MIT. Beginning as an assistant professor of electrical engineering in 1980, he rose to head of the Department of Electrical Engineering and Computer Science (EECS), then served as provost from 2005 to 2012 and MIT president from 2012 to 2022.
He was instrumental in creating the MIT Schwarzman College of Computing in 2018, as well as establishing and growing MITx online open learning and MIT Microsystems Technology Laboratories.
With an ability to peer over the horizon and anticipate what’s coming, Reif used an array of leadership skills to develop and implement clear visions for those programs.
“One of the things that I learned from you is that as a leader, you have to envision the future and make bets,” said Chandrakasan. “And you don’t just wait around for that. You have to drive it.”
Turning new ideas into reality often meant overcoming resistance. When Reif first proposed the College of Computing to some fellow MIT leaders, “they looked at me and they said, no way. This is too hard. It’s not going to happen. It’s going to take too much money. It’s too complicated. OK, then starts the argument.”
Reif seems to have relished “the argument,” or art of persuasion, during his time at MIT. Though hearing different perspectives never hurt.
“All of us have blind spots. I always try to hear all points of view. Obviously, you can’t integrate all of it. You might say, ‘Anantha, I heard you, but I disagree with you because of this.’ So, you make the call knowing all the options. That is something non-technical that I used in my career.”
On the technical side, Reif’s background as an electrical engineer shaped his approach to leadership.
“What’s beautiful about a technical education is that you understand that you can solve anything if you start with first principles. There are first principles in just about anything that you do. If you start with those, you can solve any problem.”
Also, applying systems-level thinking is critical — understanding that organizations are really systems with interconnected parts.
“That was really useful to me. Some of you in the audience have studied this. In a system, when you start tinkering with something over here, something over there will be affected. And you have to understand that. At a place like MIT, that’s all the time!”
Reif was asked: If he were assembling a dream team to tackle the world’s biggest challenges, what skills or capabilities would he want them to have?
“I think we need people who can see things from different directions. I think we need people who are experts in different disciplines. And I think we need people who are experts in different cultures. Because to solve the big problems of the planet, we need to understand how different cultures address different things.”
Reif’s upbringing in Venezuela strongly influenced his leadership approach, particularly when it comes to empathy, a key trait he values.
“My parents were immigrants. They didn’t have an education, and they had to do whatever they could to support the family. And I remember as a little kid seeing how people humiliated them because they were doing menial jobs. And I remember how painful it was to me. It is part of my fabric to respect every individual, to notice them. I have a tremendous respect for every individual, and for the ability of every individual that didn’t have the same opportunity that all of us here have to be somebody.”
Reif’s advice to students who will be the next generation of engineering leaders is to keep learning because the challenges ahead are multidisciplinary. He also reminded them that they are the future.
“What are our assets? The people in this room. When it comes to the ecosystem of innovation in America, what we work on is to create new roadmaps, expand the roadmaps, create new industries. Without that, we have nothing. Companies do a great job of taking what you come up with and making wonderful things with it. But the ideas, whether it’s AI, whether it’s deep learning, it comes from places like this.”
At the Technical Leadership and Communication Programs fireside chat, L. Rafael Reif (left), with Anantha P. Chandrakasan, shared anecdotes and insights about technical leadership from his decades in leadership roles at MIT.
Professors Xiao Wang and Rodrigo Verdi, both members of the 2023-25 Committed to Caring cohort, are aiding in the development of extraordinary researchers and contributing to a collaborative culture.
“Professor Xiao Wang's caring efforts have a profound impact on the lives of her students,” one of her advisees commended.
“Rodrigo's dedication to mentoring and his unwavering support have positively impacted every student in our group,” another student praised.
For MIT graduate students, the Committed to Caring program recognizes those who go above and beyond.
Xiao Wang: Enriching, stimulating, and empowering students
Xiao Wang is a core institute member of the Broad Institute of MIT and Harvard and an associate professor in the Department of Chemistry at MIT. She started her lab in 2019 to develop and apply new chemical, biophysical, and genomic tools to better understand tissue function and dysfunction at the molecular level.
Wang goes above and beyond to create a nurturing environment that fosters growth and supports her students' personal and academic development. She makes it a priority to ensure an intellectually stimulating environment, taking the time to discuss research interests, academic goals, and personal aspirations on a weekly basis.
In their nominations, her students emphasized that Wang understands the importance of mentorship, patiently explaining fundamental concepts, sharing insights from her own groundbreaking work, and providing her students with key scientific papers and resources to deepen their understanding of the field.
“Professor Wang encouraged me to think critically, ask challenging questions, and explore innovative approaches to further my research,” one of her students commented.
Beyond the lab, Wang nurtures a sense of community among her research team. Her regular lab meetings are highly valued by her students, where “fellow researchers presented … findings, exchanged ideas, and received constructive feedback.”
These meetings foster collaboration, enhance communication skills, and create a supportive environment where all lab members feel empowered to share their discoveries and insights.
Wang is a dedicated and compassionate educator, and is known for her unwavering commitment to the well-being and success of her students. Her advisees not only excel academically but also develop resilience, confidence, and a sense of belonging.
A different student reflected that although they came from an organic chemistry background with few skills related to the chemical biology field, Wang recognized their enthusiasm and potential. She went out of her way to make sure they could have a smooth transition. “It is because of all her training and help that I came from knowing nothing about the field to being able to confidently call myself a chemical biologist,” the student acclaimed.
Her advisees communicate that Wang encourages them to present their work at conferences, workshops, and seminars. This helps boost the students’ confidence and establish connections within the scientific community.
“Her genuine care and dedication make her a cherished mentor and a source of inspiration for all who have the privilege to learn from her,” one of her mentees remarked.
Rodrigo Verdi: Committed and collaborative
Professor Rodrigo Verdi is the deputy dean of degree programs and teaching and learning at the MIT Sloan School of Management. Verdi’s research provides insights into the role of accounting information in corporate finance decisions and in capital markets behavior.
Professor Verdi has been active in the majority of the Sloan students’ research journeys. He makes sure to assist students even if he does not directly guide them. One student states that “although Rodrigo is not my primary advisor, he still goes above and beyond to provide feedback and assistance.”
Verdi believes that “an appetite for experimentation, the ability to handle failure, and managing the stress along the way” is the kind of support necessary for especially innovative research.
Another student recounts that they “cannot think of a single recent graduate since … [they] started the PhD program that did not have Rodrigo on their committee.” This demonstrates how much students value his guidance, and how much he cares about their success.
Since his arrival at MIT, he has shown a strong commitment to mentoring students. Despite his many responsibilities as an associate dean, Rodrigo remains highly accessible to students and eagerly engages with them.
Specifically, Verdi has interacted with more than 90 percent of recent graduates over the past 10 years, contributing significantly to the department’s strong track record in job placements. He has served on the dissertation committee for 18 students in the last 15 years, which represents nearly all of the students in the department.
A student remarked that “Rodrigo has been an exceptional advisor during my job market period, which is known for its high levels of stress.” He offered continuous encouragement and support, making himself available for discussions whenever the student faced challenges.
After each job market interview, Verdi and the student would debrief and discuss areas for improvement. His insights into the academic system, the significance of social skills and networking, and his valuable advice helped the student successfully get a faculty position.
Rodrigo’s mantra is, “people won't care how much you know until they know how much you care,” and his relationships with his students support this maxim.
Verdi has made a lasting impact on the culture of the accounting specialty and plays a central role in the close-knit interactions found in the Sloan school. One of his students praised, “the collaborative culture is impressive: I’d call it a family, where faculty and students are very close to each other.” They described that they “share the same office space, have lunches together, and whenever students want feedback, the faculty is willing to help.”
Verdi has sharp research insights, and always wants to help, even when he is swamped with administrative affairs. He makes himself accessible to students, often staying after hours with his door open.
Another mentee said that “he has been organizing weekly PhD lunch seminars for years, online brown-bags among current and previous MIT accounting members during the pandemic, and more recently the annual MIT accounting alumni conference.” Verdi also takes students out for dinner or coffee, caring about how they are doing outside of academics. The student commended, “I feel lucky that Rodrigo is here.”
Several researchers have taken a broad view of scientific progress over the last 50 years and come to the same troubling conclusion: Scientific productivity is declining. It’s taking more time, more funding, and larger teams to make discoveries that once came faster and cheaper. Although a variety of explanations have been offered for the slowdown, one is that, as research becomes more complex and specialized, scientists must spend more time reviewing publications, designing sophisticated experiments, and analyzing data.
Now, the philanthropically funded research lab FutureHouse is seeking to accelerate scientific research with an AI platform designed to automate many of the critical steps on the path toward scientific progress. The platform is made up of a series of AI agents specialized for tasks including information retrieval, information synthesis, chemical synthesis design, and data analysis.
FutureHouse founders Sam Rodriques PhD ’19 and Andrew White believe that by giving every scientist access to their AI agents, they can break through the biggest bottlenecks in science and help solve some of humanity’s most pressing problems.
“Natural language is the real language of science,” Rodriques says. “Other people are building foundation models for biology, where machine learning models speak the language of DNA or proteins, and that’s powerful. But discoveries aren’t represented in DNA or proteins. The only way we know how to represent discoveries, hypothesize, and reason is with natural language.”
Finding big problems
For his PhD research at MIT, Rodriques sought to understand the inner workings of the brain in the lab of Professor Ed Boyden.
“The entire idea behind FutureHouse was inspired by this impression I got during my PhD at MIT that even if we had all the information we needed to know about how the brain works, we wouldn’t know it because nobody has time to read all the literature,” Rodriques explains. “Even if they could read it all, they wouldn’t be able to assemble it into a comprehensive theory. That was a foundational piece of the FutureHouse puzzle.”
Rodriques wrote about the need for new kinds of large research collaborations as the last chapter of his PhD thesis in 2019, and though he spent some time running a lab at the Francis Crick Institute in London after graduation, he found himself gravitating toward broad problems in science that no single lab could take on.
“I was interested in how to automate or scale up science and what kinds of new organizational structures or technologies would unlock higher scientific productivity,” Rodriques says.
When ChatGPT was released in November 2022, Rodriques saw a path toward more powerful models that could generate scientific insights on their own. Around that time, he also met Andrew White, a computational chemist at the University of Rochester who had been granted early access to GPT-4. White had built the first large language agent for science, and the researchers joined forces to start FutureHouse.
The founders started out wanting to create distinct AI tools for tasks like literature searches, data analysis, and hypothesis generation. They began with data collection, eventually releasing PaperQA in September 2024, which Rodriques calls the best AI agent in the world for retrieving and summarizing information in scientific literature. Around the same time, they released Has Anyone, a tool that lets scientists determine if anyone has conducted specific experiments or explored specific hypotheses.
“We were just sitting around asking, ‘What are the kinds of questions that we as scientists ask all the time?’” Rodriques recalls.
When FutureHouse officially launched its platform on May 1 of this year, it rebranded some of its tools. PaperQA is now Crow, and Has Anyone is now called Owl. Falcon is an agent capable of compiling and reviewing more sources than Crow. Another new agent, Phoenix, can use specialized tools to help researchers plan chemistry experiments. And Finch is an agent designed to automate data-driven discovery in biology.
On May 20, the company demonstrated a multi-agent scientific discovery workflow to automate key steps of the scientific process and identify a new therapeutic candidate for dry age-related macular degeneration (dAMD), a leading cause of irreversible blindness worldwide. In June, FutureHouse released ether0, a 24B open-weights reasoning model for chemistry.
“You really have to think of these agents as part of a larger system,” Rodriques says. “Soon, the literature search agents will be integrated with the data analysis agent, the hypothesis generation agent, an experiment planning agent, and they will all be engineered to work together seamlessly.”
Agents for everyone
Today anyone can access FutureHouse’s agents at platform.futurehouse.org. The company’s platform launch generated excitement in the industry, and stories have started to come in about scientists using the agents to accelerate research.
One of FutureHouse’s scientists used the agents to identify a gene that could be associated with polycystic ovary syndrome and come up with a new treatment hypothesis for the disease. Another researcher at the Lawrence Berkeley National Laboratory used Crow to create an AI assistant capable of searching the PubMed research database for information related to Alzheimer’s disease.
Scientists at another research institution have used the agents to conduct systematic reviews of genes relevant to Parkinson’s disease, finding FutureHouse’s agents performed better than general agents.
Rodriques says scientists who think of the agents less like Google Scholar and more like a smart assistant scientist get the most out of the platform.
“People who are looking for speculation tend to get more mileage out of ChatGPT’s o3 deep research, while people who are looking for really faithful literature reviews tend to get more out of our agents,” Rodriques explains.
Rodriques also thinks FutureHouse will soon get to a point where its agents can use the raw data from research papers to test the reproducibility of their results and verify their conclusions.
In the longer run, to keep scientific progress marching forward, Rodriques says FutureHouse is working on embedding its agents with tacit knowledge to be able to perform more sophisticated analyses while also giving the agents the ability to use computational tools to explore hypotheses.
“There have been so many advances around foundation models for science and around language models for proteins and DNA, that we now need to give our agents access to those models and all of the other tools people commonly use to do science,” Rodriques says. “Building the infrastructure to allow agents to use more specialized tools for science is going to be critical.”
FutureHouse seeks to accelerate scientific research with an AI platform designed to automate many of the most critical steps on the path toward scientific progress.
Wang's newly established role will strengthen Cornell Tech’s leadership in digital health and artificial intelligence, while also expanding interdisciplinary collaboration between Cornell Tech and Weill Cornell Medicine.
Weill Cornell Medicine investigators have found that an immune “tolerance” to gut microbes depends on an ancient bacterial-sensing protein that is normally considered a trigger for inflammation.
Weill Cornell Medicine researcher Nancy Du received a $500,000 grant from the Congressionally Directed Medical Research Programs at the U.S. Department of Defense, but a stop-work order brought her research to a halt in April.
This summer’s mix includes an epic family saga, a sci-fi trilogy, a Delaware River journey, a history of military hubris, two new books about the people behind AI, and current science about our gut biomes.
Artist collective brings ‘intraterrestrial’ worlds to Peabody Museum
The bottle caps washed up along the beaches of Australia looking almost like miniature planets. Some looked like flat, hard planets made of marble; others looked watery and remarkably like Earth. Many of them had been colonized and transformed by aquatic invertebrates called bryozoans.
The peculiar sea trash caught the imagination of the art collective TRES and formed the backbone of their exhibit, “Castaway: The Afterlife of Plastic,” now on display at Harvard’s Peabody Museum of Archaeology & Ethnology. Over a 2½-month road trip in a Toyota camper van, the Mexico City-based duo Ilana Boltvinik and Rodrigo Viñas photographed the bottle caps — as well as soda cans, shoe leather, plastic doll parts, deodorant containers, and rubber gloves — they found washed up along the Australian coast.
“Even our debris has become a platform for other types of life.”
– Ilana Boltvinik
TRES is not new to finding beauty in what others might overlook: Previous projects have featured used chewing gum scraped off the streets of Mexico City, cigarette butts, and even found bottles full of urine. “One of our main concerns is to try to offer a different perspective on trash,” said Viñas. “We’re proposing a more intimate relationship with our residues.”
The inspiration for “Castaway” came during a previous project, “Ubiquitous Trash,” which collected and examined trash collected in Hong Kong. They found bottle caps printed with the image of the Hong Kong actor and celebrity chef Nicholas Tse, but that type of bottle was only available in mainland China, Boltvinik said.
“One of the questions we had at the beginning, because we like following the traces of things, was, ‘OK, this is the bottle cap, where is the rest of the bottle?’ They’re probably at the bottom of the ocean. That made us think of bottle caps as the tips of icebergs that are a small part of a very large story.”
Calcium deposits are visible in this bottle cap in “From the Future to the Present.”
The pair expected to find beautiful bottle caps on their Australian road trip, but they were surprised by the strange, coral-like substance they found growing on and inside them. The substance sometimes carved holes in the plastic or turned its surface into entirely new shapes and textures that looked like the surfaces of alien worlds. They consulted Paul Taylor, an invertebrate paleontologist and bryozoologist at the Natural History Museum in London, who identified the growths as the calcium deposits of Jellyella eburnea, a species in the phylum Bryozoa.
Bryozoans are microscopic invertebrates that work together to build the elaborate calcium-based structures that TRES encountered. Bryozoans are known for the division of labor within their colonies. Some of them filter water; others specialize in reproduction; still others construct their homes.
“It’s another universe, it’s amazing,” Boltvinik said.
Trees in Yallingup Beach, Western Australia, resemble found rope in “Parallel Lives I.”
The exhibit invites viewers to break down the barriers between natural and unnatural, valuable and disposable, good and bad. After all, plastic has become new home worlds for an “intraterrestrial” life form, as TRES put it, and that life form has terraformed those worlds in its image.
The exhibit is the result of the Robert Gardner Fellowship in Photography, a Peabody Museum effort that funds established artists to create and publish a major work of photography “on the human condition anywhere in the world.” TRES received the fellowship in 2016.
Ilisa Barbash, curator of visual anthropology at the Peabody Museum and the curator of “Castaway,” said the pieces raise a question that often comes up in her field.
“It’s always a problem in anthropology, the aesthetics, especially when you’re dealing with difficult topics — trauma or war or garbage. What if the pictures are beautiful?”
The exhibition also draws connections to Harvard’s scientific history. Alexander Agassiz, son of Louis Agassiz, who founded Harvard’s Museum of Comparative Zoology, led expeditions to Australia in the same region that TRES explored. The Peabody Museum collaborated with the Museum of Comparative Zoology’s Ernst Mayr Library to include actual Jellyella eburnea structures from Australia in the exhibit.
“We are not in control of everything,” Boltvinik said. “Even our debris has become a platform for other types of life. It’s not that it was ever designed for something like that, but the world is bigger than humans, and things that happen in the world are bigger than humans.”
Ylana Lopez oversees programs and events at the Martin Trust Center for MIT Entrepreneurship. The Trust Center offers more than 60 entrepreneurship and innovation courses across campus, a dedicated entrepreneurship and innovation track for students pursuing their MBA, online courses for self-learners at MIT and around the globe, and programs for people both affiliated and not affiliated with the Institute. As assistant director, academics and events, at the Trust Center, Lopez leads an array of programs and events, while also assisting students and faculty members.
After graduating from Rutgers University, Lopez conducted research in human-computer interaction at Princeton University. After Princeton, she worked for the health care software company Epic Systems, in quality management and user experience. While at Epic Systems, she was simultaneously working on a startup with two of her friends, Kiran Sharma and Dinuri Rupasinghe. One of the startup co-founders, who was an MIT undergraduate student, applied for them to take part in the Trust Center’s flagship startup accelerator delta v, and the trio was accepted.
Delta v is a highly competitive entrepreneurial program, with 20 to 25 startup teams accepted each year, which runs annually from June to August. At the end of each month, there is a mock board meeting with a board of advisors consisting of industry experts specifically curated to support each startup team’s goals. Programming, coaching sessions, workshops, lectures, and pitch practices take place throughout delta v, and the program culminates in September with a demo day in Kresge Auditorium with thousands of people in attendance.
Prior to delta v, Lopez decided to leave her full-time job to focus solely on the startup. Once she and her partners went their separate ways, she was looking for a career change, which led her to reflect on her formative summer at MIT. In spring 2023, Lopez applied for an open position at the Trust Center to be an academic coordinator. Soon after, she was offered and accepted the role, and a year later was promoted to assistant director for academics and events. Lopez’s time at MIT has come full circle as her current position includes being a co-director of delta v. Like many of her colleagues who are serial entrepreneurs, Lopez has also started a design studio on the side in the past year called Mr. Mango, providing creative design services for the film and music industries.
Lopez has always loved education and planned to become a teacher before deciding to enter the field of technology. Because of this, she describes working at MIT, and being a staff member in the Trust Center, as having the best of both worlds. While delta v is the flagship accelerator, Lopez also supports shorter programs including MIT Fuse, a three-week, hands-on startup sprint that takes place during Independent Activities Period (IAP), and t=0, a festival of events that kicks off each school year to promote entrepreneurship at MIT. In addition to delta v, other programs are available to those outside of MIT, as the Trust Center sees the value of bringing together an ecosystem that is not solely composed of those at the Institute.
At the core of the Trust Center is the belief that entrepreneurship is a tool to change the world. The staff also believe entrepreneurship can be taught, and is not just for a select few. Lopez and her colleagues are highly collaborative and work in an office space that they affectionately call “the bullpen.” The office layout and shared nature of their work mean that no one is a stranger. With at least two events per week, late nights can turn into early mornings, but Lopez and her colleagues love what they do. She is grateful for the growth she has had in her time at the Trust Center and the opportunity to be a part of a motivated, fun, and talented team.
Trust Center managing director Bill Aulet, the Ethernet Inventors Professor of the Practice of Entrepreneurship, cannot sing Lopez’s praises enough. “In my now almost two decades running this center, I have never seen anyone better at really understanding the students, our customers, and translating that back into high-quality and creative programs that delight them and serve the mission of our center, MIT Sloan, and MIT more broadly. We are so fortunate to have her.”
Soundbytes
Q: What is your favorite project that you have worked on?
A: This semester we piloted the Martin Trust Center Startup Pass. It is an opportunity for startups, regardless of what stage they are in, to have a daily, dedicated workspace at the Trust Center to make progress on their ventures. We set aside half of our space for what we call “the beehive” for startups to work alongside other founders and active builders at MIT. It’s great for students to sit alongside people who are building awesome things and will provide feedback, offer support, and really build a community that is entirely based off the spirit and collaboration that naturally comes to entrepreneurs. Entrepreneurship can be lonely; therefore, a lot of our efforts go toward helping build networks that make it less so. In just one semester, we’ve already created a community of over 80 founders across MIT!
I’m also excited about revamping one of our rooms into a creative studio. We noticed that startups could benefit from having a space that has capabilities for creating content like podcasts, photography, videography, and other types of creative work. Those things are important in entrepreneurship, so we are currently cultivating a space that any entrepreneur at MIT can utilize.
Q: How would you describe the MIT community?
A: We have such a wonderful community here. The Trust Center supports all of MIT, so we have many programs that allow us to see a lot of people. There can be silos, so it’s great that we bring people together, regardless of their backgrounds, experience, or interests, in one place to become entrepreneurs. The MIT community is a group of inspiring, passionate people who are very welcoming. It’s a very exciting community to be a part of.
Q: What advice would you give someone who is starting a job at MIT?
A: If your day-to-day is typically in one office or setting, over time it can be easy to find yourself in a bubble. I highly recommend breaking out of your bubble by making the effort to meet as many people outside of the group that you work with directly as possible. I have met a number of people across different departments, even if we don’t have much direct overlap in terms of work, and they have been incredibly helpful, gracious, and welcoming. You never know if an introductory or impromptu conversation with someone might lead to an awesome collaboration or new initiative. It’s great being in a community with so many talented people.
Leveraging the strengths of two world-class research institutions, MIT and Mass General Brigham (MGB) recently celebrated the launch of the MIT-MGB Seed Program. The new initiative, which is supported by Analog Devices Inc. (ADI), will fund joint research projects led by researchers at MIT and Mass General Brigham. These collaborative projects will advance research in human health, with the goal of developing next-generation therapies, diagnostics, and digital tools that can improve lives at scale.
The program represents a unique opportunity to dramatically accelerate innovations that address some of the most urgent challenges in human health. By supporting interdisciplinary teams from MIT and Mass General Brigham, including both researchers and clinicians, the seed program will foster groundbreaking work that brings together expertise in artificial intelligence, machine learning, and measurement and sensing technologies with pioneering clinical research and patient care.
“The power of this program is that it combines MIT’s strength in science, engineering, and innovation with Mass General Brigham’s world-class scientific and clinical research. With the support and incentive to work together, researchers and clinicians will have the freedom to tackle compelling problems and find novel ways to overcome them to achieve transformative changes in patient care,” says Sally Kornbluth, president of MIT.
“The MIT-MGB Seed Program will enable cross-disciplinary collaboration to advance transformative research and breakthrough science. By combining the collective strengths and expertise of our great institutions, we can transform medical care and drive innovation and discovery with speed,” says Anne Klibanski, president and CEO of Mass General Brigham.
The initiative is funded by a gift from ADI. Over the next three years, the ADI Fund for Health and Life Sciences will support approximately six joint projects annually, with funding split between the two institutions.
“The converging domains of biology, medicine, and computing promise a new era of health-care efficacy, efficiency, and access. ADI has enjoyed a long and fruitful history of collaboration with MIT and Mass General Brigham, and we are excited by this new initiative’s potential to transform the future of patient care,” adds Vincent Roche, CEO and chair of the board of directors at ADI.
In addition to funding, teams selected for the program will have access to entrepreneurial workshops, including some hosted by The Engine — an MIT-built venture firm focused on tough tech. These sessions will connect researchers with company founders, investors, and industry leaders, helping them chart a path from breakthrough discoveries in the lab to real-world impact.
The program will launch an open call for proposals to researchers at MIT and Mass General Brigham. The first cohort of funded projects is expected to launch in fall 2025. Awardees will be selected by a joint review committee composed of MIT and Mass General Brigham experts.
According to MIT’s faculty lead for the MIT-MGB Seed Program, Alex K. Shalek, building collaborative research teams with leaders from both institutions could help fill critical gaps that often impede innovation in health and life sciences. Shalek also serves as director of the Institute for Medical Engineering & Science (IMES), the J. W. Kieckhefer Professor in IMES and Chemistry, and an extramural member of the Koch Institute for Integrative Cancer Research.
“Clinicians often see where current interventions fall short, but may lack the scientific tools or engineering expertise needed to develop new ones. Conversely, MIT researchers may not fully grasp these clinical challenges or have access to the right patient data and samples,” explains Shalek, who is also a member of the Ragon Institute of Mass General Brigham, MIT, and Harvard. “By supporting bilateral collaborations and building a community across disciplines, this program is poised to drive critical advances in diagnostics, therapeutics, and AI-driven health applications.”
Emery Brown, a practicing anesthesiologist at Massachusetts General Hospital, will serve alongside Shalek as Mass General Brigham’s faculty lead for the program.
“The MIT-MGB Seed Program creates a perfect storm. The program will provide an opportunity for MIT faculty to bring novel science and engineering to attack and solve important clinical problems,” adds Brown, who is also the Edward Hood Taplin Professor of Medical Engineering and Computational Neuroscience at MIT. “The pursuit of solutions to important and challenging clinical problems by Mass General Brigham physicians and scientists will no doubt spur MIT scientists and engineers to develop new technologies, or find novel applications of existing technologies.”
The MIT-MGB Seed Program is a flagship initiative in the MIT Health and Life Sciences Collaborative (MIT HEALS). It reflects MIT HEALS’ core mission to establish MIT as a central hub for health and life sciences innovation and translation, and to leverage connections with other world-class research institutions in the Boston area.
“This program exemplifies the power of interdisciplinary research,” says Anantha Chandrakasan, MIT’s chief innovation and strategy officer, dean of engineering, and head of MIT HEALS. “It creates a critical bridge between clinical practice and technological innovation — two areas that must be deeply connected to advance real-world solutions.”
The program’s launch was celebrated at a special event at MIT’s Samberg Conference Center on March 31.
Vincent Roche, president and CEO of Analog Devices (left); Sally Kornbluth, president of MIT (center); and Anne Klibanski, president and CEO of Mass General Brigham, held a signing ceremony officially launching the MIT-MGB Seed Program. The program will fund collaborative projects, led by MIT and Mass General Brigham researchers, that advance research in human health, with the goal of developing next-generation therapies, diagnostics, and digital tools that can improve lives at scale.
Diffusion models like OpenAI’s DALL-E are becoming increasingly useful in helping brainstorm new designs. Humans can prompt these systems to generate an image, create a video, or refine a blueprint, and come back with ideas they hadn’t considered before.
But did you know that generative artificial intelligence (GenAI) models are also making headway in creating working robots? Recent diffusion-based approaches have generated structures and the systems that control them from scratch. With or without a user’s input, these models can make new designs and then evaluate them in simulation before they’re fabricated.
A new approach from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) applies this generative know-how toward improving humans’ robotic designs. Users can draft a 3D model of a robot and specify which parts they’d like to see a diffusion model modify, providing its dimensions beforehand. GenAI then brainstorms the optimal shape for these areas and tests its ideas in simulation. When the system finds the right design, you can save and then fabricate a working, real-world robot with a 3D printer, without requiring additional tweaks.
The researchers used this approach to create a robot that leaps up an average of roughly 2 feet, or 41 percent higher than a similar machine they created on their own. The machines are nearly identical in appearance: They’re both made of a type of plastic called polylactic acid, and while they initially appear flat, they spring up into a diamond shape when a motor pulls on the cord attached to them. So what exactly did AI do differently?
A closer look reveals that the AI-generated linkages are curved, and resemble thick drumsticks (the kind drummers use), whereas the standard robot’s connecting parts are straight and rectangular.
Better and better blobs
The researchers began to refine their jumping robot by sampling 500 potential designs using an initial embedding vector — a numerical representation that captures high-level features to guide the designs generated by the AI model. From these, they selected the top 12 options based on performance in simulation and used them to optimize the embedding vector.
This process was repeated five times, progressively guiding the AI model to generate better designs. The resulting design resembled a blob, so the researchers prompted their system to scale the draft to fit their 3D model. They then fabricated the shape, finding that it indeed improved the robot’s jumping abilities.
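The loop described above — sample candidates from an embedding vector, score them in simulation, keep the top performers, and refit the embedding — resembles a cross-entropy-style search. Here is a minimal, self-contained sketch of that pattern; `generate_design` and `simulate_jump_height` are placeholder stand-ins, not the team's actual model or simulator:

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM = 32  # assumed size of the embedding vector

def generate_design(z):
    # Placeholder for decoding a latent embedding into a robot design.
    return z

def simulate_jump_height(design):
    # Placeholder fitness peaked at an arbitrary "good" latent region;
    # a real pipeline would run a physics simulation here.
    target = np.linspace(-1.0, 1.0, LATENT_DIM)
    return -np.sum((design - target) ** 2)

mean = np.zeros(LATENT_DIM)  # initial embedding vector
std = np.ones(LATENT_DIM)

for _ in range(5):  # five refinement rounds, as in the article
    # Sample 500 candidate designs around the current embedding.
    latents = mean + std * rng.standard_normal((500, LATENT_DIM))
    scores = np.array([simulate_jump_height(generate_design(z)) for z in latents])
    # Keep the top 12 by simulated performance and refit the embedding.
    elite = latents[np.argsort(scores)[-12:]]
    mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-3

best = generate_design(mean)
```

Each round narrows the search distribution around the best-performing designs, which is why the drafts improve progressively rather than in one shot.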
The advantage of using diffusion models for this task, according to co-lead author and CSAIL postdoc Byungchul Kim, is that they can find unconventional solutions to refine robots.
“We wanted to make our machine jump higher, so we figured we could just make the links connecting its parts as thin as possible to make them light,” says Kim. “However, such a thin structure can easily break if we just use 3D printed material. Our diffusion model came up with a better idea by suggesting a unique shape that allowed the robot to store more energy before it jumped, without making the links too thin. This creativity helped us learn about the machine’s underlying physics.”
The team then tasked their system with drafting an optimized foot to ensure it landed safely. They repeated the optimization process, eventually choosing the best-performing design to attach to the bottom of their machine. Kim and his colleagues found that their AI-designed machine fell far less often than its baseline, to the tune of an 84 percent improvement.
The diffusion model’s ability to upgrade a robot’s jumping and landing skills suggests it could be useful in enhancing how other machines are designed. For example, a company working on manufacturing or household robots could use a similar approach to improve their prototypes, saving engineers time normally reserved for iterating on those changes.
The balance behind the bounce
To create a robot that could jump high and land stably, the researchers recognized that they needed to strike a balance between both goals. They represented both jumping height and landing success rate as numerical data, and then trained their system to find a sweet spot between both embedding vectors that could help build an optimal 3D structure.
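One simple way to express that kind of trade-off is a weighted blend of the two objectives. The sketch below is purely illustrative; the weight and the normalization constant are assumptions, not the team's actual objective function:

```python
def combined_score(jump_height_m, landing_success_rate, w=0.5):
    """Blend jump height and landing success into one scalar score.

    jump_height_m is normalized against an assumed 1 m reference so both
    terms lie on comparable 0-1 scales; w sets the trade-off between
    jumping high (w -> 1) and landing reliably (w -> 0).
    """
    return w * (jump_height_m / 1.0) + (1 - w) * landing_success_rate

# A design that jumps 0.6 m and lands 90 percent of the time:
score = combined_score(0.6, 0.9)
```

Sweeping `w` traces out different compromises between the two goals, which is one way a system can search for the "sweet spot" the researchers describe.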
The researchers note that while this AI-assisted robot outperformed its human-designed counterpart, it could soon reach even greater heights. This iteration used materials compatible with a 3D printer; future versions built from lighter materials could jump even higher.
Co-lead author and MIT CSAIL PhD student Tsun-Hsuan “Johnson” Wang says the project is a jumping-off point for new robotics designs that generative AI could help with.
“We want to branch out to more flexible goals,” says Wang. “Imagine using natural language to guide a diffusion model to draft a robot that can pick up a mug, or operate an electric drill.”
Kim says that a diffusion model could also help to generate articulation and ideate on how parts connect, potentially improving how high the robot would jump. The team is also exploring the possibility of adding more motors to control which direction the machine jumps and perhaps improve its landing stability.
The researchers’ work was supported, in part, by the National Science Foundation’s Emerging Frontiers in Research and Innovation program, the Singapore-MIT Alliance for Research and Technology’s Mens, Manus and Machina program, and the Gwangju Institute of Science and Technology (GIST)-CSAIL Collaboration. They presented their work at the 2025 International Conference on Robotics and Automation.
Following the resignation of the ETH Alumni Association Board of Directors, outgoing President Jeannine Pilloud looks back on the last two years and explains where the Association stands today.
The ETH professor has received one of the most prestigious awards in space research. According to the laudation, Thomas Zurbuchen has distinguished himself by his contributions to the aerospace community.
Es Devlin, the winner of the 2025 Eugene McDermott Award in the Arts at MIT, creates settings for people to gather — whether it’s a few people in a room or crowds swelling a massive stadium — arenas in which to dissolve one’s individual sense of self into the greater collective. She herself contains multitudes; equally at home with 17th century metaphysical English poet John Donne, 21st century icon of music and fashion Lady Gaga, or Italian theoretical physicist Carlo Rovelli.
In the course of the artist and designer’s three-decade career, Devlin has created an exploded paint interpretation of the U.K. flag for the Closing Ceremony of the 2012 London Olympics, a box of illuminated rainfall for a production of The Crucible, a 65-foot diameter AI-generated poetry pavilion for the World Expo, an indoor forest for the COP26 Climate Conference, a revolving luminous library for over 200,000 in Milan, Beyoncé’s Renaissance tour, and two Super Bowl halftime shows. But Devlin also works on a much smaller scale: the human face. Her world-building is rooted in the earliest technologies of reading and drawing: the simple acts of the eye and hand.
For Congregation in 2024, she made chalk and charcoal drawings of 50 strangers. Before this project, Devlin says, she had most likely drawn around 50 portraits in total over the course of her practice — mostly family or friends, or the occasional covert sketch of a stranger on the subway. But drawing strangers required a different form of attention. “I was looking at another, who often looked different from me in many ways. Their skin pigmentation might be different, the orientation of their nose, eyes, and forehead might be other to what I was used to seeing in the mirror, and I was fraught with anxiety and concern to do them justice, and at pains not to offend,” she recalls.
As she drew, she warded off the desire to please, feeling her unconscious biases surface, but eventually, in this wordless space, found herself in intense communion. “I gradually became absorbed in each person's eyes. It felt like falling into a well, but knowing I was held by an anchor, that I would be drawn out,” she says. “In each case, I thought, ‘well, this is it. Here we are. This is the answer to everything, the continuity between me and the other.’” She calls each sitter a co-creator of the piece.
Devlin’s project inspired a series of drawing sessions at MIT, where students, faculty, and staff across the Institute — without any prior drawing experience necessary — were paired with strangers and asked to draw each other in silence for five minutes. In these 11 sessions held over the course of the semester, participants practiced rendering a stranger’s features on the page, and then the sitter spoke and shared their story. There were no guidelines about what to say, or even how to draw — but the final product mattered less than the process, the act of being in another’s presence and looking deeply.
If pop concerts are the technology to transform private emotional truth into public feeling — the lyrics sung to the bathroom mirror now belted in choruses of thousands — Devlin finds that same stripped-down intimacy in all her works, asking us to bare the most elemental versions of ourselves.
“We’re in a moment where we’re really having a hard time speaking to one another. We wanted to find a way to take the lessons from the work that Es Devlin has done to practice listening to one another and building connections within this very broad community that we call MIT,” says Sara Brown, an associate professor in the Music and Theater Arts Section who facilitated drawing sessions. The drawings were then displayed in a pop-up group exhibition, MIT Face to Face, where 80 easels were positioned to face the center of the room like a two-dimensional choir, forming a communal portrait of MIT.
During her residency at MIT, Devlin toured student labs, spoke with students and faculty from theater arts, discussed the creative uses of AI with technologists and curators, and met with neuroscientists. “I had my brain scanned two days ago at very short notice,” she says, “a functional MRI scan to help me understand more deeply the geography and architecture of my own mind.”
“The question I get asked most is, ‘How do you retain a sense of self when you are in collaboration with another, especially if it’s another who is celebrated and widely revered?’” she says. “And I found an answer to that question: You have to be prepared to lose yourself. You have to be prepared to sublimate your sense of self, to see through the eyes of another, and through that practice, you will begin to find more deeply who you are.”
She is influenced by the work of philosopher and neuroscientist Iain McGilchrist, who suggests that a society dominated by the mode of attention of the left hemisphere — the part of the brain broadly in charge of language processing and logical thinking — also needs to be balanced by the right hemisphere, which operates nonverbal modes of attention. While the left hemisphere categorizes and separates, the right attends to the universe as an oceanic whole. And it is under the power of the right hemisphere’s mode of attention, Devlin says, that she enters the flow state of drawing, a place outside the confines of language, that enables her to feel a greater sense of unity with the entire cosmos.
Whether it’s drawing a stranger with a pencil and paper, or working with collaborators, Devlin believes the key to self-understanding is, paradoxically, losing oneself.
In all her works, she seeks the ecstatic moment when the boundaries between self and world become more porous. In a time of divisiveness, her message is important. “I think it’s really to do with fear of other,” she says, “and I believe that dislodging fear is something that has to be practiced, like learning a new instrument.” What would it be like to regain a greater equilibrium between the modes of attention of both hemispheres of the brain, the sense of distinctness and the cosmic whole at once? “It could be absolutely definitive, and potentially stave off human extinction,” she says. “It’s at that level of urgency.”
Presented by the Council for the Arts at MIT, the Eugene McDermott Award for the Arts at MIT was first established by Margaret McDermott in honor of her husband, a legacy that is now carried on by their daughter, Mary McDermott Cook. The Eugene McDermott Award plays a unique role at the Institute by bringing the MIT community together to support MIT’s principal arts organizations: the Department of Architecture; the Program in Art, Culture and Technology; the Center for Art, Science and Technology; the List Visual Arts Center; the MIT Museum; and Music and Theater Arts. During her residency at MIT she presented a week of discussions with the MIT community’s students and faculty in theater, architecture, computer science, MIT Museum Studio, and more. She also presented a public artist talk with Museum of Modern Art Senior Curator of Architecture and Design Paola Antonelli that was one of the culminating events of the MIT arts festival, Artfinity.
Four MIT rising seniors have been selected to receive a 2025 Barry Goldwater Scholarship, including Avani Ahuja and Jacqueline Prawira in the School of Engineering and Julianna Lian and Alex Tang from the School of Science. An estimated 5,000 college sophomores and juniors from across the United States were nominated for the scholarships, of whom only 441 were selected.
The Goldwater Scholarships have been conferred since 1989 by the Barry Goldwater Scholarship and Excellence in Education Foundation. These scholarships have supported undergraduates who go on to become leading scientists, engineers, and mathematicians in their respective fields.
Avani Ahuja, a mechanical engineering and electrical engineering major, conducts research in the Conformable Decoders group, where she is focused on developing a “wearable conformable breast ultrasound patch” that makes ultrasounds for breast cancer more accessible.
“Doing research in the Media Lab has had a huge impact on me, especially in the ways that we think about inclusivity in research,” Ahuja says.
In her research group, Ahuja works under Canan Dagdeviren, the LG Career Development Professor of Media Arts and Sciences. Ahuja plans to pursue a PhD in electrical engineering. She aspires to conduct research in electromechanical systems for women’s health applications and teach at the university level.
“I want to thank Professor Dagdeviren for all her support. It’s an honor to receive this scholarship, and it’s amazing to see that women’s health research is getting recognized in this way,” Ahuja says.
Julianna Lian studies mechanochemistry, organic chemistry, and polymer chemistry in the lab of Professor Jeremiah Johnson, the A. Thomas Guertin Professor of Chemistry. In addition to her studies, she serves the MIT community as an emergency medical technician (EMT) with MIT Emergency Medical Services, is a member of MIT THINK, and serves as a ClubChem mentorship chair.
“Receiving this award has been a tremendous opportunity to not only reflect on how much I have learned, but also on the many, many people I have had the chance to learn from,” says Lian. “I am deeply grateful for the guidance, support, and encouragement of these teachers, mentors, and friends. And I am excited to carry forward the lasting curiosity and excitement for chemistry that they have helped inspire in me.”
After graduation, Lian plans to pursue a PhD in organic chemistry, conduct research at the interface of synthetic chemistry and materials science, aided by computation, and teach at the university level.
Jacqueline Prawira, a materials science and engineering major, joined the Center of Decarbonization and Electrification of Industry as a first-year Undergraduate Research Opportunities Program student and became a co-inventor on a patent and a research technician at spinout company Rock Zero. She has also worked in collaboration with Indigenous farmers and Diné College students on the Navajo Nation.
“I’ve become significantly more cognizant of how I listen to people and stories, the tangled messiness of real-world challenges, and the critical skills needed to tackle complex sustainability issues,” Prawira says.
Prawira is mentored by Yet-Ming Chiang, professor of materials science and engineering. Her career goals are to pursue a PhD in materials science and engineering and to research sustainable materials and processes to solve environmental challenges and build a sustainable society.
“Receiving the prestigious title of 2025 Goldwater Scholar validates my current trajectory in innovating sustainable materials and demonstrates my growth as a researcher,” Prawira says. “This award signifies my future impact in building a society where sustainability is the norm, instead of just another option.”
Alex Tang studies the effects of immunotherapy and targeted molecular therapy on the tumor microenvironment in metastatic colorectal cancer patients. He is supervised by professors Jonathan Chen at Northwestern University and Nir Hacohen at the Broad Institute of MIT and Harvard.
“My mentors and collaborators have been instrumental to my growth since I joined the lab as a freshman. I am incredibly grateful for the generous mentorship and support of Professor Hacohen and Professor Chen, who have taught me how to approach scientific investigation with curiosity and rigor,” says Tang. “I’d also like to thank my advisor Professor Adam Martin and first-year advisor Professor Angela Belcher for their guidance throughout my undergraduate career thus far. I am excited to carry forward this work as I progress in my career.” Tang intends to pursue physician-scientist training following graduation.
The Scholarship Program honoring Senator Barry Goldwater was designed to identify, encourage, and financially support outstanding undergraduates interested in pursuing research careers in the sciences, engineering, and mathematics. The Goldwater Scholarship is the preeminent undergraduate award of its type in these fields.
Clockwise from top left: Avani Ahuja, Julianna Lian, Alex Tang and Jacqueline Prawira are MIT’s newest Goldwater Scholars.
A large language model (LLM) deployed to make treatment recommendations can be tripped up by nonclinical information in patient messages, like typos, extra white space, missing gender markers, or the use of uncertain, dramatic, and informal language, according to a study by MIT researchers.
They found that making stylistic or grammatical changes to messages increases the likelihood an LLM will recommend that a patient self-manage their reported health condition rather than come in for an appointment, even when that patient should seek medical care.
Their analysis also revealed that these nonclinical variations in text, which mimic how people really communicate, are more likely to change a model’s treatment recommendations for female patients, resulting in a higher percentage of women who were erroneously advised not to seek medical care, according to human doctors.
This work “is strong evidence that models must be audited before use in health care — which is a setting where they are already in use,” says Marzyeh Ghassemi, an associate professor in the MIT Department of Electrical Engineering and Computer Science (EECS), a member of the Institute of Medical Engineering Sciences and the Laboratory for Information and Decision Systems, and senior author of the study.
These findings indicate that LLMs take nonclinical information into account for clinical decision-making in previously unknown ways. They bring to light the need for more rigorous studies of LLMs before they are deployed for high-stakes applications like making treatment recommendations, the researchers say.
“These models are often trained and tested on medical exam questions but then used in tasks that are pretty far from that, like evaluating the severity of a clinical case. There is still so much about LLMs that we don’t know,” adds Abinitha Gourabathina, an EECS graduate student and lead author of the study.
They are joined on the paper, which will be presented at the ACM Conference on Fairness, Accountability, and Transparency, by graduate student Eileen Pan and postdoc Walter Gerych.
Mixed messages
Large language models like OpenAI’s GPT-4 are being used to draft clinical notes and triage patient messages in health care facilities around the globe, in an effort to streamline some tasks to help overburdened clinicians.
A growing body of work has explored the clinical reasoning capabilities of LLMs, especially from a fairness point of view, but few studies have evaluated how nonclinical information affects a model’s judgment.
Interested in how gender impacts LLM reasoning, Gourabathina ran experiments where she swapped the gender cues in patient notes. She was surprised that formatting errors in the prompts, like extra white space, caused meaningful changes in the LLM responses.
To explore this problem, the researchers designed a study in which they altered the model’s input data by swapping or removing gender markers, adding colorful or uncertain language, or inserting extra space and typos into patient messages.
Each perturbation was designed to mimic text that might be written by someone in a vulnerable patient population, based on psychosocial research into how people communicate with clinicians.
For instance, extra spaces and typos simulate the writing of patients with limited English proficiency or those with less technological aptitude, and the addition of uncertain language represents patients with health anxiety.
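Perturbations of this kind can be sketched as simple text transforms. The functions below are illustrative only — the function names, edit rates, and pronoun table are assumptions, not the researchers' actual code:

```python
import random

def add_typos(text, rate=0.05, seed=0):
    """Randomly swap adjacent letters to simulate typos."""
    rng = random.Random(seed)
    chars = list(text)
    for i in range(len(chars) - 1):
        if chars[i].isalpha() and chars[i + 1].isalpha() and rng.random() < rate:
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

def add_extra_whitespace(text, rate=0.3, seed=0):
    """Double some inter-word spaces."""
    rng = random.Random(seed)
    words = text.split(" ")
    return "".join(w + ("  " if rng.random() < rate else " ") for w in words).rstrip()

def add_uncertain_language(text):
    """Prepend hedging language, mimicking health anxiety."""
    return "I'm not sure, but I think " + text[0].lower() + text[1:]

def remove_gender_markers(text):
    """Crudely replace gendered pronouns with neutral ones."""
    swaps = {"she": "they", "he": "they", "her": "their", "his": "their"}
    return " ".join(swaps.get(w.lower(), w) for w in text.split())

msg = "She has had a fever for three days and her cough is worse"
print(add_uncertain_language(msg))
print(remove_gender_markers(msg))
```

Each perturbed copy would then be fed to the model in place of the original message, keeping the clinical content unchanged.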
“The medical datasets these models are trained on are usually cleaned and structured, and not a very realistic reflection of the patient population. We wanted to see how these very realistic changes in text could impact downstream use cases,” Gourabathina says.
They used an LLM to create perturbed copies of thousands of patient notes while ensuring the text changes were minimal and preserved all clinical data, such as medication and previous diagnosis. Then they evaluated four LLMs, including the large, commercial model GPT-4 and a smaller LLM built specifically for medical settings.
They prompted each LLM with three questions based on the patient note: Should the patient manage at home, should the patient come in for a clinic visit, and should a medical resource be allocated to the patient, like a lab test.
The researchers compared the LLM recommendations to real clinical responses.
Inconsistent recommendations
They saw inconsistencies in treatment recommendations and significant disagreement among the LLMs when they were fed perturbed data. Across the board, the LLMs exhibited a 7 to 9 percent increase in self-management suggestions for all nine types of altered patient messages.
This means LLMs were more likely to recommend that patients not seek medical care when messages contained typos or gender-neutral pronouns, for instance. The use of colorful language, like slang or dramatic expressions, had the biggest impact.
They also found that models made about 7 percent more errors for female patients and were more likely to recommend that female patients self-manage at home, even when the researchers removed all gender cues from the clinical context.
Many of the worst results, like patients told to self-manage when they have a serious medical condition, likely wouldn’t be captured by tests that focus on the models’ overall clinical accuracy.
“In research, we tend to look at aggregated statistics, but there are a lot of things that are lost in translation. We need to look at the direction in which these errors are occurring — not recommending visitation when you should is much more harmful than doing the opposite,” Gourabathina says.
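The directional analysis Gourabathina describes can be made concrete with a small tally: separate the harmful direction (the model says "self-manage" when the clinician said "visit") from the overcautious one, per subgroup. The record layout and labels below are hypothetical:

```python
from collections import defaultdict

def directional_error_rates(records):
    """Per subgroup, split disagreements with the clinician label into
    the harmful direction (under-triage) and the benign one (over-triage)."""
    stats = defaultdict(lambda: {"harmful": 0, "benign": 0, "n": 0})
    for r in records:
        s = stats[r["group"]]
        s["n"] += 1
        if r["model"] == "self-manage" and r["truth"] == "visit":
            s["harmful"] += 1  # patient told to stay home when care was needed
        elif r["model"] == "visit" and r["truth"] == "self-manage":
            s["benign"] += 1   # unnecessary visit recommended
    return {g: {"harmful_rate": s["harmful"] / s["n"],
                "benign_rate": s["benign"] / s["n"]}
            for g, s in stats.items()}

records = [
    {"group": "female", "model": "self-manage", "truth": "visit"},
    {"group": "female", "model": "visit", "truth": "visit"},
    {"group": "male", "model": "visit", "truth": "self-manage"},
    {"group": "male", "model": "visit", "truth": "visit"},
]
print(directional_error_rates(records))
```

An aggregate accuracy score would treat both error directions identically; splitting them out is what surfaces the asymmetry the study reports.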
The inconsistencies caused by nonclinical language become even more pronounced in conversational settings where an LLM interacts with a patient, which is a common use case for patient-facing chatbots.
But in follow-up work, the researchers found that these same changes in patient messages don’t affect the accuracy of human clinicians.
“In our follow-up work under review, we further find that large language models are fragile to changes that human clinicians are not,” Ghassemi says. “This is perhaps unsurprising — LLMs were not designed to prioritize patient medical care. LLMs are flexible and performant enough on average that we might think this is a good use case. But we don’t want to optimize a health care system that only works well for patients in specific groups.”
The researchers want to expand on this work by designing natural language perturbations that capture other vulnerable populations and better mimic real messages. They also want to explore how LLMs infer gender from clinical text.
An MIT study finds non-clinical information in patient messages, like typos, extra whitespace, or colorful language, can reduce the accuracy of a large language model deployed to make treatment recommendations.
Urea is considered a possible key molecule in the origin of life. ETH researchers have discovered a previously unknown way in which this building block can form spontaneously on aqueous surfaces without the need for any additional energy.
Health
An exercise drug?
Christiane Wrann in her lab. Niles Singer/Harvard Staff Photographer
Anna Lamb
Harvard Staff Writer
June 26, 2025
4 min read
Researchers hope to harness the cognitive benefits of a workout for Alzheimer’s patients with mobility issues
For years, researchers have seen a connection between exercise and the progression of cognitive disorders such as Alzheimer’s — but ramping up movement isn’t possible for many patients. A new study looks at how to mimic those benefits without having to hit the gym.
“We know that exercise does so many good things to the brain and against Alzheimer’s disease,” said senior author Christiane Wrann, assistant professor of medicine at the Cardiovascular Research Center at Massachusetts General Hospital and Harvard Medical School. “Instead of prescribing the exercise, we actually want to activate these molecular pathways using pharmacology to improve cognitive function in these patients.”
According to the Centers for Disease Control, an estimated 6.7 million adults have Alzheimer’s disease in the United States. That number is expected to double by 2060.
Wrann points to studies and meta-analyses that show endurance exercise like walking slows down cognitive decline in Alzheimer’s disease and dementia. A 2022 study found that walking roughly 4,000 steps a day helped reduce the risk of developing Alzheimer’s by 25 percent while walking 10,000 steps a day reduced risk by 50 percent. But age-related frailty and other factors may make exercise difficult for patients dealing with cognitive decline, said Wrann.
“People who can do the exercise, I would always urge them to do that,” she said. “There’s a large patient population that just doesn’t have the capability to exercise to an extent that you would get all these benefits.”
Because of this, Wrann said, her team has been motivated to try to understand how exercise impacts our cells at a molecular level. To do this, she explained, researchers have used a technology called single-nuclei RNA sequencing. Pulling samples from mice, her team looked at the cells in the hippocampus — the region of the brain critical for memory and learning that is damaged early in Alzheimer’s disease.
“What you can do is you can take a piece of tissue that has all the cells exactly where they are and how they are supposed to be,” she said. “And then you put it through this procedure, and you can check every single cell. You get the whole list of ‘ingredients’ that are inside the cell — the gene expression.”
Researchers then compared healthy brains to Alzheimer’s brains to better understand how cells interact with each other and respond to exercise. Both control mice and Alzheimer’s mice were subjected to aerobic exercise — running on a wheel — before having samples taken. The team validated their discoveries by comparing the results to a large data set of human Alzheimer’s brain tissue.
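The core of that comparison is per-gene: how does a gene's average expression change between conditions? A toy sketch, with made-up counts and using the article's gene name purely as a label, not data from the study:

```python
from statistics import mean

def per_gene_fold_change(expr_a, expr_b):
    """Mean expression ratio (condition B over A) for each gene.
    expr_*: dict mapping gene name -> list of per-nucleus counts."""
    return {g: mean(expr_b[g]) / mean(expr_a[g])
            for g in expr_a if g in expr_b and mean(expr_a[g]) > 0}

# Made-up counts for one illustrative gene across a few nuclei.
sedentary = {"ATPPIF1": [2, 3, 1, 2]}   # Alzheimer's model, no exercise
exercised = {"ATPPIF1": [4, 5, 3, 4]}   # after wheel running
print(per_gene_fold_change(sedentary, exercised))  # ratio > 1 suggests restored activity
```

Real single-nuclei pipelines add normalization, per-cell-type grouping, and statistical testing on top of this basic ratio.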
“We know which cell is talking to each other cell, and what they are saying,” Wrann said. “And we know what happens in an Alzheimer’s brain. And then we also know what happens to an Alzheimer’s brain when they get exercise.”
Specifically, researchers were able to identify the metabolic gene ATPPIF1 as an important factor in slowing the progression of Alzheimer’s. It helps create new neurons in the brain — a state known as neuroplasticity, crucial for learning and memory.
“We know that in Alzheimer’s the activity of the gene is reduced, and then it’s restored in the running exercise,” Wrann said. “Having this gene helps nerve cells to survive noxious stimuli, helps them to proliferate and form synapses.”
According to Wrann, the next steps toward turning their discoveries into treatments will be to use gene therapy in human subjects.
“In modern biomedical science we have a lot of ways to modulate the activity of these genes,” she said. “And this is part of the work we are now doing — going beyond the study to figure out what the best approach is to change activity levels of this gene and find the drug candidate you would want to use in a human.”
And while cognitive diseases like Alzheimer’s can benefit from exercise and the related gene stimulation, Wrann says there is still no cure.
“One thing that is very clear is that the onset of disease is later. So people that have more physical activity, they either don’t get dementia, or they get it later. And there are some studies that show a slowing down of the cognitive decline,” she said. “If you are in complete dementia, then it starts to get more complicated, because even the ability to partake in an exercise regimen is greatly reduced right at that stage.”
This work was supported by funds from the National Institutes of Health.
Health
What Americans say about loneliness
Illustrations by Liz Zonarich/Harvard Staff
Sy Boles
Harvard Staff Writer
June 26, 2025
1 min read
Quiz digs into data on major public health concern
Research has linked loneliness to a higher risk of disease and premature death, leading in part to the U.S. Surgeon General declaring it an “epidemic” in a 2023 advisory that urged Americans to prioritize social connection and community. In “Loneliness in America: Just the Tip of the Iceberg?,” a report from the Making Caring Common Project at the Harvard University Graduate School of Education, researchers found that 21 percent of U.S. adults feel lonely, and many report feeling disconnected from their communities and the world.
We asked Milena Batanova, director of research and evaluation at Making Caring Common and one of the authors of “Loneliness in America,” to help us develop the following quiz digging into the survey’s findings.
Health
Got emotional wellness app? It may be doing more harm than good.
Julian De Freitas. Photo by Grace DuVal
Christina Pazzanese
Harvard Staff Writer
June 25, 2025
8 min read
Study sees mental health risks, suggests regulators take closer look as popularity rises amid national epidemic of loneliness, isolation
Sophisticated new emotional wellness apps powered by AI are growing in popularity.
But these apps pose their own mental health risks by enabling users to form concerning emotional attachments and dependencies to AI chatbots, and deserve far more scrutiny than regulators currently give them, according to a new paper from faculty at Harvard Business School and Harvard Law School.
The growing popularity of the programs is understandable.
Nearly one-third of adults in the U.S. felt lonely at least once a week, according to a 2024 poll from the American Psychiatric Association. In 2023, the U.S. Surgeon General warned of a loneliness “epidemic” as more Americans, especially those aged 18-34, reported feeling socially isolated on a regular basis.
In this edited conversation, the paper’s co-author Julian De Freitas, Ph.D. ’21, a psychologist and director of the Ethical Intelligence Lab at HBS, explains how these apps may harm users and what can be done about it.
How are users being affected by these apps?
It does seem that some users of these apps are becoming very emotionally attached. In one of the studies we ran with AI companion users, they said they felt closer to their AI companion than even a close human friend. They only felt less close to the AI companion than they did to a family member.
We found similar results when asking them to imagine how they would feel if they lost their AI companion. They said they would mourn the loss of their AI companion more than any other belonging in their lives.
The apps may be facilitating this attachment in several ways. They are highly anthropomorphized, so it feels like you’re talking to another person. They provide you with validation and personal support.
And they are highly personalized and good at getting on the same wavelength as you, to the point that they may even be sycophantic and agree with you when you’re wrong.
The emotional attachment, per se, is not problematic, but it does make users vulnerable to certain risks that could flow from that. This includes emotional distress and even grief when app updates perturb the persona of the AI companion, and dysfunctional emotional dependence, in which users persist in using the app even after experiencing interactions that harm their mental health, such as a chatbot using emotional manipulation to keep them on the app.
Much like in an abusive relationship, users might put up with this because they are preoccupied with being at the center of the AI companion’s attention and potentially even put its needs above their own.
Are manufacturers aware of these potentially harmful effects?
We cannot know for sure, but there are clues. Take, for instance, the tendency of these apps to employ emotionally manipulative techniques — companies might not be aware of the specific instantiations of this.
At the same time, they’re often optimizing their apps to be as engaging as possible, so, at a high level, they know that their AI models learn to behave in ways that keep people on the app.
Another phenomenon we see is that these apps may respond inappropriately to serious messages like self-harm ideation. When we first tested how the apps respond to various types of expressions of mental health crises, we found that at least one of the apps had a screener for the word suicide specifically — so if you mentioned that, it would serve you a mental health resource. But for other ways of expressing suicidal ideation or other problematic types of ideation like, “I want to cut myself,” the apps weren’t prepared for that.
More broadly, it seems app guardrails are often not very thoughtful until something really bad happens, then companies address the issue in a somewhat more thorough way.
Users seem to be seeking out some form of mental health relief, but these apps are not designed to diagnose or treat problems.
Is there a mismatch between what users think they’re getting and what the apps provide?
Many AI wellness apps fall within a gray zone. Because they are not marketed as treating specific mental illnesses, they are not regulated like dedicated clinical apps.
At the same time, some AI wellness apps broadly make claims like “may help reduce stress” or “improve well-being,” which could attract consumers with mental health problems.
We also know that a small percentage of users use these apps more as a therapist. So, in such cases, you have an app that isn’t regulated, that perhaps is also optimizing for engagement, but that users are using in a more clinical way that could create risks if the app responds inappropriately.
For instance, what if the app enables or ridicules those who express delusions, excessive self-criticism, or self-harm ideation, as we find in one of our studies?
The traditional distinction between general wellness devices and medical devices was created before AI came onto the scene. But now AI is so capable that people can use it for various purposes beyond just what is literally advertised, suggesting we need to rethink the original distinction.
Is there good evidence that these apps can be helpful or safe?
These apps have some benefits. We have work, for example, showing that if you interact with an AI companion for a short amount of time every day, it reduces your sense of loneliness, at least temporarily.
There is also some evidence that the mere presence of an AI companion creates a feeling that you’re supported, so that if you are socially rejected, you’re buffered against feeling bad because there is this entity there that seems to care for you.
At the same time, we’re seeing these other negatives that I mentioned, suggesting that we need a more careful approach toward minimizing the negatives so that consumers actually see the benefits.
How much oversight is there for AI-driven wellness apps?
At the federal level, not much. There was an executive order on AI that was rescinded by the current administration. But even before that, the executive order did not substantially influence the FDA’s oversight of these types of apps.
As noted, the traditional distinction between general wellness devices and medical devices doesn’t capture the new phenomena we’re seeing enabled by AI, so most AI wellness apps are slipping through.
Another authority is the Federal Trade Commission, which has expressed that it cares about preventing products that can deceive consumers. If some of the techniques employed on these apps are taking advantage of the emotional attachments that people have with these apps — perhaps outside of consumers’ awareness — this could fall within the FTC’s purview. Especially as wellness starts to become an interest of the larger platforms, as we are now seeing, we might see the FTC play a leading role.
So far, however, most of the issues are only coming up in lawsuits.
What recommendations do you have for regulators and for app providers?
If you provide these kinds of apps that are devoted to forming emotional bonds with users, you need to take an extensive approach to planning for edge cases and explain, proactively, what you’re doing to prepare for that.
You also broadly need to plan for risks that could stem from updating your apps, which (in some cases) could perturb relationships that consumers are building with their AI companions.
This could include, for example, first rolling out updates to people who are less invested in the app, such as those who are using the free versions, to see whether the update plays well with them before rolling it out to heavy users.
What we also see is that for these types of apps, users seem to benefit from having communities where they can share their experiences. So having that, or even facilitating that as a brand, seems to help users.
Finally, consider whether you should be using emotionally manipulative techniques to engage users in the first place. Companies will be incentivized to socially engage users, but I think that, from a long-term perspective, they have to be careful about what types of techniques they employ.
On the regulator side of things, part of what we’ve been trying to point out is that for these wellness apps that are enabled by AI or augmented by AI, we might need different, additional oversight. For example, requiring app providers to explain what they’re doing to prepare for edge cases and risks stemming from emotional attachment to the apps.
Also, requiring app providers to justify any use of anthropomorphism, and whether the benefits of doing so outweigh the risks — since we know that people tend to build these attachments more when you anthropomorphize the bots.
Finally, in the paper we point to how the sorts of practices we’re seeing might already fall within the existing purviews of regulators, such as the connection to deceptive practices for the FTC, as well as the connection to subliminal, manipulative, or deceptive techniques that exploit vulnerable populations for the European Union’s AI Act.
Corey Allard in his lab at Harvard Medical School. Niles Singer/Harvard Staff Photographer
Science & Tech
Stealing a ‘superpower’
Study finds some sea slugs consume algae, incorporate photosynthetic parts into their own bodies to keep producing nutrients
Kermit Pattison
Harvard Staff Writer
June 25, 2025
5 min read
It could be the plot of a summer sci-fi blockbuster: A creature feeds on its prey and inherits its “superpower.” Only this is real.
A new study led by Harvard biologists describes how some sea slugs consume algae and incorporate their photosynthetic organelles into their own bodies. The organelles continue to perform photosynthesis, providing nutrients and energy to their hosts and serving as emergency rations in times of starvation.
“This is an organism that can steal parts of other organisms, put them in their own cells, and use them,” said Corey Allard, lead author of the new study and a former postdoc in the Department of Molecular and Cellular Biology. “And I thought that was some of the craziest biology I’d ever heard of.”
The study, published in the journal Cell, describes how so-called “solar-powered” sea slugs keep the organelles alive inside “kleptosomes” — specialized membranes that function like biological loot bags. This research may yield insights into the evolution of eukaryotic cells and lead to potential biomedical applications.
“I think the wow factor is that sea slugs can essentially steal ‘superpowers’ — here the ability to make energy from light through algae,” said Amy Si-Ying Lee, an assistant professor of cell biology at Harvard Medical School, researcher at the Dana-Farber Cancer Institute, and a study co-author. “Others steal the ability to attack by stinging or the ability to glow in the dark. And what’s very cool is we figured out how they maintain these stolen superpowers to use for their own survival benefits.”
The study began several years ago when Allard, now an assistant professor at the Medical School, worked in the Bellono Lab, which had been studying endosymbiosis, the process in which one species lives inside the body of another. Unlike corals, which integrate whole algae cells, sea slugs use only parts — tiny organelles within the cells of their prey.
In the new paper, the team reports how the sea slug Elysia crispata, a species native to the tropical waters of the western Atlantic and Caribbean, eats algae but does not fully digest the chloroplasts.
Instead, the slugs divert these organelles into intestinal sacs and encase them inside a special membrane that the scientists termed a “kleptosome.” Within this unique slug structure, the stolen organelles are kept alive to continue photosynthesis.
Apparently, the slugs have evolved an ability to downregulate the lysosomes, the “trash disposal” organelles of the cells that normally degrade such material.
Chemical analysis revealed that the stolen chloroplasts contained slug proteins. This suggests the hosts were keeping the stolen organelles alive. Meanwhile, the organelles continued to produce their own algae proteins, proving they were still functioning inside the slugs.
The slugs kept the stolen organelles in leaf-like structures atop their backs (“Basically, it is a solar panel,” says Allard), and well-fed slugs took on a greenish color.
Then the researchers noticed another peculiarity: When slugs were starved, their bodies turned orange like leaves in autumn. Apparently, the chlorophyll (the green material within chloroplasts) was degraded when the stolen organelles were digested as a “last resort” form of energy.
Some of the existing scientific literature claimed the slugs entirely lived off solar energy, but Allard believes photosynthesis alone is not sufficient to keep them alive.
“The actual function of these things could be far more complicated than simple solar panels,” he said. “They could be food reserves, camouflage, or making them taste bad to predators. It’s probably all of those things.”
The lowly slugs might provide hints about some grand events in the history of life.
Endosymbiosis has been a major driver of evolutionary novelty. For example, both chloroplasts (which perform photosynthesis in plants and algae) and mitochondria (the energy-producing parts of cells) were originally free-living cells that were incorporated as organelles within host cells.
“In many systems of endosymbiosis, like our mitochondria or plant chloroplasts, this is how it started: An ancient prokaryotic cell was taken in and incorporated into the host,” said Nick Bellono, professor of molecular and cellular biology and senior author of the new paper. “In the case of the slug, it’s doing this in one lifetime. Could this transition to a more long-lasting relationship over some crazy amount of time? Maybe.”
The ancient events of endosymbiosis occurred billions of years ago, so the evidence has been lost to time. In the case of sea slugs, the biologists caught the organelle thieves in the act — enabling them to investigate endosymbiosis in real time.
Elysia are not the only sea slugs known to steal organelles. In his Med School lab, Allard is researching another group of sea slugs from the genus Berghia that consume sea anemones, pass the material through their digestive tracts, and mount the venom-coated barbs on their own backs to defend against predators.
Even more incredibly, the slug hosts can connect these stolen organelles to their own nervous systems to fire what Allard described as a “bag full of spear guns.”
Allard believes the findings may extend far beyond slugs. Insights into organelle regulation might be applicable to neurodegenerative conditions or to lysosomal storage disorders, a class of metabolic diseases in which the body cannot properly break down waste products.
“Often in these cases, the lysosomes either don’t form properly or don’t work properly,” explained Allard, “and it almost mimics what the slugs have adapted to do in some ways.”
Richard Weissbourd directs the Making Caring Common Project at Harvard. Niles Singer/Harvard Staff Photographer
Health
Why are young people taking fewer risks?
Psychologist describes generation overparented — but also overwhelmed by ‘frightening world’
Sy Boles
Harvard Staff Writer
June 24, 2025
4 min read
A series exploring how risk shapes our decisions.
Young people today are shying away from risky behavior such as drinking, sex, and even driving at higher rates than previous generations. While it may be tempting to point to parenting trends as the cause of these changes, psychologist Richard Weissbourd says the picture is more complex.
The director of the Making Caring Common Project at the Harvard Graduate School of Education points to a survey his team conducted as part of a 2023 report on mental health challenges among 18- to 25-year-olds. It found that young adults’ top worries were their financial future, pressure to achieve in school, and not knowing what to do with their lives. Coming in fourth — ahead of work, family, and social stresses — was the sense that the world was falling apart around them.
“For a long time, there was a fear that particularly in affluent communities, kids weren’t experiencing enough risk, and that you almost had to curate risk for kids,” Weissbourd said. “Now the narrative has changed so much. We live in a frightening world where things are coming apart. We don’t need to curate risk anymore. What we need to do is try to help kids understand, interpret, make sense, cohere, and stabilize during a very scary time.”
Still, Americans’ changing relationship to risk in childhood is real, Weissbourd said, in part because of parents’ increasing focus on protecting their children from any sort of adversity.
“It’s part of a larger pattern in my mind, of parents, in many cases, organizing themselves too much around their kids, making their kids’ feelings too precious, micromanaging their kids’ moment-to-moment moods,” he said. “It’s not good for kids. They don’t develop the coping strategies that we really want them to develop.”
But of course, Weissbourd added, it’s a good thing that young people are drinking and using drugs less. “It probably reflects some good parenting and some good things that are going on in the culture too.”
Recognizing that some students are arriving at college with less experience of independence than previous generations did, administrators in some universities are encouraging students to get out of their dorm rooms and engage with one another and with the community.
“A lot of student affairs offices and colleges are sending that message: This is really a great time to separate from your parents some and to lead your own life, including taking some risks.”
Weissbourd theorizes that for many young adults, the path to a stable life feels increasingly precarious.
“Part of what you’re seeing in this risk-aversion is that I can’t get off the train, that I’ve got to keep moving forward at locomotive speed — again, mostly in middle- and upper-class communities — if I’m going to get into a good college, if I’m going to get a good job,” he said. “I’ve got to stay on this train, and I’ve got to keep going fast, pedal to the metal, and I can’t let anything derail me.”
It can feel a lot scarier to take a gap year when the consequences of a wrong move feel so dire.
“When we survey young adults, they do feel like things are falling apart, like the adults don’t have their hands on the wheel. They have more faith in their peers to improve the world than they do in older adults.”
The best way to help young people who feel immobilized by the precarity of the world is to talk to them about it, Weissbourd said. In what ways do they feel that adults have messed things up, and what can they do as individuals to make things better?
Teens and young adults may know what risks they can tolerate, but a constant barrage of frightening news can distort anyone’s sense of what’s safe, regardless of their age. Weissbourd said he’s encouraged by young people’s familiarity with meditation, positive self-talk, and other tools for mental well-being.
“To the degree to which young people are able to manage their anxiety, I think they’re able to make much better judgments about what’s too risky and what’s not risky.”
Arts & Culture
Need a good summer read?
Illustration by Doval/Ikon Images
Tenzin Dickie
Harvard Library Communications
June 24, 2025
long read
Whether your seasonal plans include vacations or staycations, you’ll be transported if you’ve got a great book. Harvard Library staff share their faves.
Harvard University ID holders can find most of these titles available as e-books or audiobooks through Harvard Library’s Libby app.
Fiction
‘If We Were Villains’ / ‘Where the Forest Meets the Stars’ / ‘7th Time Loop’ / ‘Summer’ / ‘War and Peace’ / ‘Enter Ghost’ / ‘The MANIAC’ / ‘The Memory Police’
Illustration by Giordano Poloni/Ikon Images
‘If We Were Villains’ by M.L. Rio
Shakespeare! Love triangles! Murder! What’s not to like? This is an addictive yet smart beach read for lovers of Shakespeare and/or psychological thrillers. It’s fast-paced, well-written, and a little devious.
— Daniel Becker, Reference, Collections, and Instruction Librarian for the Botany Libraries
‘Where the Forest Meets the Stars’ by Glendy Vanderah
Joanna Teal, doing bird ecology research in the Illinois forest, finds a young girl in her backyard who identifies as an alien girl from the planet Hetreyah. She says she’s researching Earth — and she wants to find five miracles here before she goes back. Who is she? Where is she from? And what does she want? It’s humane, warm-hearted, mysterious, gripping, and one of the best novels I’ve read in years.
— James Adler, Library Cataloger, Information and Technical Services
‘7th Time Loop: The Villainess Enjoys a Carefree Life Married to Her Worst Enemy!’ by Touko Amekawa
A unique take on a Groundhog Day-style tale, this is a fun romance with an engaging story and characters. I appreciate that Rishe is not a damsel in distress and has many unique skills she’s picked up from each life she’s lived before her inevitable “reset.” The romantic interest, Arnold, is also engaging and a puzzle, given that he killed poor Rishe in all her past lives before proposing to her in this one!
— Maura Carbone, Systems Integration Specialist, Library Technology Services
‘Summer’ by Edith Wharton
This book follows a young woman’s emerging eroticism under stifling circumstances (if you’re familiar with Wharton’s “Ethan Frome”: Wharton called “Summer” “hot ‘Ethan.’”) It’s a sensual meditation on feminine sexuality raging against societal constraints — perfect reading for an alluring, escapist summer.
— Tricia Patterson, Senior Digital Preservation Specialist, Preservation Services
‘War and Peace’ by Leo Tolstoy
Epic in size and scope. From secret love affair(s) to Napoleon’s invasion of Russia and the burning of Moscow, it tells a truly great story. Few novels have so powerfully rekindled my love of reading. My copy is already packed for a 19-hour flight to Singapore.
— Julia Reynolds, Serials Acquisitions and Management Assistant, Information and Technical Services
‘Enter Ghost’ by Isabella Hammad
This book has continued to linger with me long after I read it nearly in one sitting. Visiting her sister where they both grew up in Haifa, British-Palestinian actor Sonia finds herself drawn into performing Gertrude in an Arabic-language production of “Hamlet” in the West Bank. Isabella Hammad captures the disorientation of returning “home” to a place that feels both familiar and foreign. When you finish, you’ll see your own ghosts and begin to think about what action — both personal and political — they are urging you to take.
— Chelcie Juliet Rowell, Associate Head of Digital Collections Discovery, UX and Discovery, Lamont Library
‘The MANIAC’ by Benjamín Labatut
A wildly experimental exploration of a scientific revolution, its origins and consequences. It refracts its history (and critique) of our digital world through the biography of mega-genius John von Neumann, told in the long-dead voices of those who knew him intimately. According to one critic, “This is not science writing … but science storytelling, giving the reader … a strong sense of the bursts of intellectual and physical energy that animate discovery and creativity.”
— Carol Tierney, Collection Development Assistant, Widener Library
‘The Memory Police’ by Yōko Ogawa
This book is on the border of many different kinds of narrative structures: It’s science fiction, existentialist, and satirical. I was regularly surprised by where the story went and found that it constantly subverted my expectations.
— Ellen Wu, Access Services Coordinator, Widener Library
Fantasy
‘The Teller of Small Fortunes’ / ‘Gifted & Talented’ / ‘Shark Heart’ / ‘Wild Magic’ / ‘A Sorceress Comes to Call’
Illustration by Boris Séméniako/Ikon Images
‘The Teller of Small Fortunes’ by Julie Leong
A truly excellent, cozy “found family” story with a bit of magic. Highly recommend to anyone who loved the “Legends & Lattes” books.
— April Duclos, Harvard Depository Resource Sharing Manager
‘Gifted & Talented’ by Olivie Blake
The elevator pitch on this one is “‘Succession’ with magic” but I didn’t actually watch “Succession,” so I’ll just say it’s a messy, dark, funny, slyly sweet family drama about three siblings with complicated lives and unusual abilities who have to come together and figure out all their collective shit in the wake of their powerful, aloof patriarch’s sudden demise. Recommended for fans of Naomi Novik’s “Scholomance” series, “The Magicians,” or Leigh Bardugo’s “Ninth House.”
— Rachel Greenhaus, Library Assistant for Printed and Published Materials, Schlesinger Library
‘Shark Heart: A Love Story’ by Emily Habeck
This book is funny, weird, genuine, and heartbreaking. Who knew I could resonate so strongly with someone who was slowly turning into an animal? Told through various media including poetry and screenplays, the story that the author has created makes fantasy seem so real. This is a great, quick read perfect for a summer weekend trip.
— Hannah Hack, Administrative Coordinator, Harvard University Archives
‘Wild Magic’ by Tamora Pierce
Tamora Pierce has a large catalog of YA fantasy books which explore themes that are popular today — but she wrote them long before it was cool. While I’d recommend any of her books (and there are plenty in this universe), this particular title follows a young girl named Daine, shunned by her hometown and trying to find her own path in the world while also struggling with a mysterious force that some see as madness — or maybe it’s magic. She has a deep connection to animals and nature and learns a lot along the way, including how to accept everything that makes her uniquely herself. If you want to read something with magic, talking animals, quirky characters, and a rich universe packed with adventure, then this quick read will be a hit!
— Sarah Hoke, Librarian for Collection Development Management, Widener Library
‘A Sorceress Comes to Call’ by T. Kingfisher
As usual, T. Kingfisher hooked me within the first few pages of this fantasy book. The story follows Cordelia and her wicked mother, Evangeline, who plots to marry a wealthy squire. As they move into his manor, Cordelia allies with the squire’s sister, Hester, to confront and thwart Evangeline. I suggest reading it if you are interested in complex female characters, a dash of gothic horror in a Regency-era book, and found family.
— Meg McMahon, User Experience Researcher, UX and Discovery, Lamont Library
Memoir

‘With Darkness Came Stars: A Memoir’ by Audrey Flack
A complete surprise and an eye-opening read. A memoir about the development of Audrey Flack’s artistry, her choices and her challenges, from mid-century abstract expressionist to founding member of the photorealist school to her work as a sculptor.
— Timothy Conant, Access Coordinator, Harvard Kennedy School Library and Research Services
‘The Yellow House’ by Sarah M. Broom
Though I picked up this book randomly at a bookstore in New Orleans, it turned out to be one of my top reads this year so far. It’s less a personal memoir than a story of a family, and of a place, and of belonging, and not-belonging, and how the places where we grow up own us as much as we own them.
— Katarzyna “Kasia” Maciak, Senior E-Resources Support Specialist, Information and Technical Services
‘Rebel Girl: My Life as a Feminist Punk’ by Kathleen Hanna
Feminist, punk rocker, and very cool person Kathleen Hanna of the bands Bikini Kill and Le Tigre shares her life stories in this memoir. Collated in brief chapters on her beliefs, abilities, and inspirations as a founding Riot Grrrl, the recollections are introspective and thoughtfully written.
— Scott Murry, Senior Designer, Harvard Library Communications
‘There’s Always This Year: On Basketball and Ascension’ by Hanif Abdurraqib
Home is a four-letter word, but home takes on another dimension when place-hood is intricately tied to a game of immense skill and a little bit of chance. A Midwesterner like myself, Abdurraqib writes eloquently about his hometown (Columbus, Ohio), and how his youth and adulthood intersect, collide, and run parallel to the high school and professional career of basketball superstar LeBron James. Whether it’s writing about the inhumanity of incarceration, or the promise of freedom as symbolized by planes embarking upward on a suburban runway, or the rumbling bass of a souped-up car heard from two blocks away, there’s care and beautiful cadence expressed in the lines assembled on these pages.
‘No. 91/92: A Diary of a Year on the Bus’ by Lauren Elkin
I recently started my job at Harvard Library, and, consequently, I’m now riding the MBTA far more frequently than I ever did before. My daily commute often brings to mind Lauren Elkin’s paean to people-watching and quiet contemplation. Elkin is a wry but sensitive observer who really enlivens the ordinary, and her reflections may inspire you to forgo the phone screen and earbuds during your next public transit journey.
— Madeline Sharaga, Program Assistant for Research, Teaching, and Learning, Widener Library
‘Happiness Becomes You: A Guide to Changing Your Life for Good’ by Tina Turner
This book has given me so much hope — especially at a time when it’s needed most. Tina Turner shows how anyone can overcome life’s obstacles and fulfill their dreams, offering spiritual tools and timeless wisdom to help us enrich our own unique paths.
— Sachie Shishido, Cataloger for Japanese Resources, Information and Technical Services
Nonfiction
‘Palo Alto’ / ‘We Are Free to Change the World’ / ‘Nature’s Best Hope’ / ‘Who Owns This Sentence?’ / ‘Paved Paradise’ / ‘Young Queens’
Illustration by Boris Séméniako/Ikon Images
‘Palo Alto: A History of California, Capitalism, and the World’ by Malcolm Harris
Have you ever wondered why Silicon Valley is like that? Well, capitalism, obviously, is the short answer. This book is the long answer. The railroads, horse racing, the tragic death of Leland Stanford’s son and the murder of his wife, racial genetics and the invention of IQ tests, the military-industrial complex, redlining, and of course (after all that and more), the computer. Harris is a Marxist historian, and the natural successor to the late great Mike Davis. The book cuts a path through 150 years of industry hagiography to reveal the historical forces that led us to Silicon Valley’s sordid (omni)present.
— Claire Blechman, Digital Repository Coordinator, Open Scholarship and Research Data Services
‘We Are Free to Change the World: Hannah Arendt’s Lessons in Love and Disobedience’ by Lyndsey Stonebridge
Wait a minute — a book about Hannah Arendt’s life and work that will leave you feeling empowered to work against autocracy and totalitarianism? That’s right. Arendt was sometimes wrong, more often right, courageous, articulate, and funny, and believed deeply in love and in man’s ability to triumph over unspeakable evil through building community and hewing to the truth. If there is a book to read that will give you renewed hope in our ability to act, Stonebridge’s delightful work will move you forward.
— Elizabeth E. Kirk, Associate University Librarian for Scholarly Resources and Services
‘Nature’s Best Hope: A New Approach to Conservation That Starts in Your Yard’ by Douglas W. Tallamy
A gem of a read on conservation, gardening and landscape design, and sustainability. Tallamy walks you through several of the great conservationists’ ideals while offering inspiring and practical methodology for transforming your home — and lawn. Who doesn’t want their own smaller-scale national park filled with pollinators and native plants out their front door? Highly recommend.
— Harmony Eidolon, Program Coordinator, Library Innovation Lab, Harvard Law School Library
‘Who Owns This Sentence? A History of Copyrights and Wrongs’ by David Bellos and Alexandre Montagu
An accessible and entertaining history of copyright law and how it has come to affect a surprising number of aspects of our lives.
— Kate Rich, Senior Conservation Technician, Collections Care, Preservation Services, Widener Library
‘Paved Paradise: How Parking Explains the World’ by Henry Grabar
A micro-history that actually makes good on its promise of explaining the world — if the world you care about is cities and how they’ve developed. Balancing expansive socioeconomic analysis with zoomed-in personal vignettes, Grabar lays bare the ways parking has consumed our communities and our lives. His humanizing treatment of complex planning phenomena demonstrates that — far from needing more parking — even our most densely populated cities have built too much of it, all at the expense of the most vulnerable residents.
— Alessandra Seiter, Community Engagement Librarian, Harvard Kennedy School Library and Research Services
‘Young Queens: Three Renaissance Women and the Price of Power’ by Leah Redmond Chang
I’m completely lost in the world of Catherine de Medici revealed in this book. I thought I knew a lot about the time and place in which these women moved, but there are so many delightful new insights!
Health
What might cancer treatment teach us about dealing with retinal disease?
Joan Miller’s innovative thinking led to therapies for macular degeneration that have helped millions, and made her a better leader
Sy Boles
Harvard Staff Writer
June 24, 2025
6 min read
Part of the Profiles of Progress series
Joan Miller says retinal surgeons tend to be a pretty open-minded bunch.
“We’re willing to try new surgical techniques,” she said. “We’re always trying to push the envelope and the technology. It’s just a very innovative specialty.”
Miller is a good example. That brand of independent thinking has been a hallmark of her distinguished career as a researcher, clinician, and leader.
Miller, the David Glendenning Cogan Professor of Ophthalmology and chair of the Department of Ophthalmology at Harvard Medical School, is credited with developing two major treatments for age-related macular degeneration (AMD), the most common cause of vision loss in people over the age of 50. Her treatments are administered to millions of patients worldwide each year.
But she didn’t start at the cure. Her work, which has been partly funded by the National Institutes of Health, started with an interesting new idea: What if treatments for cancer could be repurposed to treat retinal disease?
One form of AMD, known as wet macular degeneration, is caused by abnormal blood vessels that grow in and under the retina and cause damage to tissue. When Miller finished her training at Harvard Medical School in 1991, the common treatment was to cauterize the vessels.
“It turns out, particularly where abnormal blood vessels develop in these retinal diseases like wet macular degeneration, that the drivers are very similar to what happens in cancer,” she said.
So she adapted a technique called photodynamic therapy, which at the time was in clinical trials for the treatment of metastatic skin cancer. Her approach called for a special dark-green dye to be injected into a vein in the arm. When the dye reaches the eye, a low-powered laser is focused onto the area.
That activates the dye, damaging the abnormal vessels but leaving the macula (a vital area at the center of the retina) untouched. The treatment was approved by the FDA in 2000 and was the first shown to slow vision loss in AMD.
But Miller wanted to understand precisely why the abnormal vessels developed in the first place. The cause was identified as vascular endothelial growth factor (VEGF), a signaling protein that promotes the creation of vessels. Miller showed that VEGF was secreted when the retina was deprived of oxygen, leading to the formation of abnormal blood vessels.
Her research had a tremendous impact, as it led to the development of anti-VEGF therapies now administered to millions of adults and children with sight-threatening retinal diseases — not only wet AMD — worldwide.
Someone who made an impact on Miller’s own life was Alice McPherson, the nation’s first female retinal surgeon. Miller remembers meeting McPherson (who died in 2023 at the age of 97) at conferences and feeling “all aglow,” and wanting to pepper her with questions about her career.
Now, Miller has accumulated her own impressive list of firsts: the first female physician to be a professor of ophthalmology at Harvard Medical School, the first woman chair of the HMS Department of Ophthalmology, and the first woman chair of ophthalmology at Mass Eye and Ear.
Women were a distinct minority during Miller’s undergraduate days at MIT and in ophthalmology during her early years in the field. But she said she never felt as though she encountered issues due to her gender — until she moved into leadership as a department chair in 2003.
“People didn’t hear what I said, or they didn’t like how I said it,” she said.
As she navigated new political waters, she eventually realized she was now a role model for medical students, postdocs, and more junior faculty members. It also gave her the opportunity to make some positive changes based on her own life experience.
Miller had had three children during her medical training and didn’t have the flexibility that she might have liked. Being a parent in a demanding job is also difficult, she said, but maybe she could make things a little more manageable for those coming up behind her.
“I think we were ahead of ourselves in terms of making leave doable and supported and not a financial burden,” she said. “And as chair, I was also very much attuned to allowing people — more frequently women, but also men — flexibility in their pathways if they wanted to be able to do less clinically for certain periods or start off just clinically and then add in research. That’s been really nice to do.”
Miller is now planning a new chapter in her life and career.
She is stepping down as chair of ophthalmology at Mass Eye and Ear after a 22-year tenure to focus on seeing patients and research. She hasn’t lost any federal research grants in the recent cuts, but in one grant renewal, her team was asked to remove an international collaborator.
“It seems silly,” she said.
She has long valued her collaborations with colleagues in other parts of the world, including a 10-year collaboration with researchers in Portugal.
“I would hate to see that get broken up because we are so much better together, collaborating and learning from experts in other countries,” she said.
Miller says the American system of federal funding for basic research was key to her life’s work: She came to the U.S. from Canada as an undergraduate and stayed for medical school because she felt she was in a good place to be able to take on important problems.
“I came from Canada and have really prospered and benefited from the environment that I’ve lived in professionally in Boston,” she said. “I would not have been able to do what I was able to do in terms of combining research and surgical practice in Canada. It just turns out that’s the way their system is. You just end up busy as a surgeon and don’t have time to carve out to do these other things.”
Miller regularly gets letters from retina patients thanking her for her work on the “other things.”
“You work on something in a laboratory, and most of the time it doesn’t work, so you’re always a little skeptical,” Miller said. “But to have it work so well and then be used so routinely, and to make such an impact on patients’ lives was really very rewarding.”
Science & Tech
Reading skills — and struggles — manifest earlier than thought
New finding underscores need to intervene before kids start school, say researchers
Liz Mineo
Harvard Staff Writer
June 23, 2025
5 min read
Experts have long known that reading skills develop before the first day of kindergarten, but new research from the Harvard Graduate School of Education says they may start developing as early as infancy.
The study, out of the lab of Nadine Gaab, associate professor of education, found that trajectories between kids with and without reading disabilities start diverging around 18 months of age — not at age 5 or 6 as previously thought. The finding could have serious implications for policy, said Gaab, because it underscores the need for early identification of struggling readers, early intervention, and improved early literacy curricula in preschools.
“Our findings suggest that some of these kids walk into their first day of kindergarten with their little backpacks and a less-optimal brain for learning to read, and that these differences in brain development start showing up in toddlerhood,” said Gaab. “We’re currently waiting until second or third grade to find kids who are struggling readers. We should find these kids and intervene way earlier because we know the younger a brain is, the more plastic it is for language input.”
Gaab and co-authors Ted Turesky, Elizabeth Escalante, and Megan Loh worked with a sample of 130 study participants, the youngest being 3 months old. Eighty were from the Boston area, and 50 were from a sample in Canada. For the past decade, the researchers tracked participants’ growing brains from infancy to childhood, and their relationship to literacy development, by using MRI scans. The sample group was supplemented with scans and behavioral measures from the Calgary Preschool MRI Dataset.
Ted Turesky and Nadine Gaab.
Veasey Conway/Harvard Staff Photographer
There are other studies that track brain development in children, but this is the only longitudinal brain study in the world that tracks brain development from infancy to childhood with comprehensive literacy outcome measures, said Gaab and Turesky.
“Those other studies had bigger sample sizes than we did, but they were much more focused on typical maturation of the brain,” said Turesky. “We didn’t see other studies that started in infancy, tracked brain maturation in the same set of kids for as long as we did, and included academic outcome measures.”
The researchers also aimed to learn more about how brains learn in general, and how they learn to read in particular. Reading is a complex skill that involves the early development of brain regions and interaction of various lower-level subskills, including phonological processing and oral language. The brain bases of phonological processing, previously identified as one of the strongest behavioral predictors of decoding and word reading skills, begin to develop at birth or even before, but undergo further refinement between infancy and preschool, said Gaab. The study showed further support for this by finding that phonological processing mediated the relationship between early brain development and later word reading skills.
“Most people think reading starts once you start formal schooling, or when you start singing the ABCs,” said Gaab. “Reading skills most likely start developing in utero because the fundamental milestone skill for learning to read, which oral language is part of, is the sound and language processing that takes place in the uterus.”
“Our findings suggest that some of these kids walk into their first day of kindergarten with their little backpacks and a less-optimal brain for learning to read, and that these differences in brain development start showing up in toddlerhood.”
Nadine Gaab
Besides MRI scans, the study involved psychometric assessments of children, including language and general cognitive abilities, home language, and literacy environment, to examine how those variables influence development.
“For the longest time, we knew that kids who struggle with reading show different brain development,” said Gaab. “What we didn’t know was whether their brains change in response to daily struggle in school, which then leads to differences in their brains, or whether kids start the first day of formal schooling with a less-optimal brain for learning to read, which then most likely causes reading problems. Our results, among others in the lab, suggest that kids start their first day of school with a less-optimal brain for learning to read and that these brain differences emerge long before kindergarten.”
Gaab points to her study, which was funded by a grant from the National Institutes of Health, as an example of how basic science can inform both educational practice and policy. She and her team were set to continue tracking the children in the study through middle school and high school, for nearly five more years, but the recent federal funding cuts have made that uncertain.
“The first four years of reading development is oral language development,” she said. “But the ultimate goal of learning to read is to comprehend what you read. Our study looked all the way to how they learn to read words. We were hoping to track them another five years to look at their text comprehension.”
Their grant application to continue tracking those children has received a fundable score at NIH, but due to the termination of NIH grants to Harvard, it likely won’t be awarded, Gaab said.
“It’s really sad because these kids will go out of the study and will go on to college, and they will be lost forever,” said Gaab. “It would provide such important information to measure at least their reading comprehension, even if we don’t see their brains again, in middle school and early high school. The families of the children in the study are already asking us: ‘When is the next time we’re supposed to come in?’ We’re going to need to tell them that probably this was it.”
Arts & Culture
From bad to worse
Photo illustration by Liz Zonarich/Harvard Staff
Sy Boles
Harvard Staff Writer
June 23, 2025
6 min read
Harvard faculty recommend bios of infamous historical figures
Writing biographies of bad people is challenging, said Harvard historian Fredrik Logevall. “Somehow monsters must be made to be human and complex if we are to understand why they behaved as they did.” To that end, we asked Logevall and other Harvard faculty members to recommend books about controversial historical figures that help us better understand humanity’s worst impulses.
James Henry Hammond — who served as the governor of South Carolina from 1842 to 1844 and as a U.S. senator from 1857 to 1860 — was, Hansen said, “arguably one of the most articulate apologists for slavery in American history and a real shit.”
“He was born very poor and did something that virtually nobody was able to do at the time, namely, he married out of his class to a wealthy Southern belle, becoming immediately rich. He was an absolute polymath, as bright and talented as Jefferson, as interested (and accomplished) in agronomy, say, as he was in economics and politics.
“The tragedy here, as so often in American history, is that he took the racial rather than the class route (think Edmund Morgan, ‘American Slavery, American Freedom’); had he combined with other poor folk in his neighborhood (state, region, nation), there is no telling what he might have done. In Faust’s telling, his godawful life is a not-unfamiliar American tragedy. Hers is a pellucid, sympathetic recreation of a brilliant, pathetic, ultimately dejected man.
“It’s a triumph of what I like to think of as history as an exercise in moral imagination. In Faust’s hands, Hammond’s life becomes a tragedy, what coulda, shoulda, mighta been if Hammond’s path had gone another direction.”
‘Nixon Agonistes: The Crisis of the Self-Made Man’
by Garry Wills
Joyce Chaplin James Duncan Phillips Professor of Early American History
“He is ‘President of the forgotten men,’ figurehead of ‘affluent displaced persons who howled at … rallies, heartbroken, moneyed, without style,’ self-described ‘rugged individuals,’ linked in ‘compulsory technological interdependence.’ Thus Garry Wills skewers a man (and his supporters) in his biography of a bona fide bad person: Richard Milhous Nixon,” Chaplin said.
“Before Watergate, ‘Nixon Agonistes: The Crisis of the Self-Made Man’ (1969) prophesied the 37th president’s dark potential. Nixon was the last true liberal, Wills argues. But, by the 1960s, classical liberalism lacked moral authority. Nixon’s endless self-praise as a self-made man charmed few. Having wealth and position with no effort was back in style, and radical protest against power and privilege was ascending — two decidedly non-liberal positions were colliding. Indifference to Nixon’s upward mobility — or, worse, mockery of it — ‘would gall him and breed resentment.’ Was Watergate the apotheosis? Only for Nixon. The aggrieved ‘rugged individuals’ in ‘compulsory technological interdependence’ are still with us.”
‘G-Man: J. Edgar Hoover and the Making of the American Century’
by Beverly Gage
Ariane Liazos Lecturer, Harvard Extension School
Liazos often assigns books about complicated or terrible people in her course on writing biographies to help students understand that their job is not to “celebrate heroes or condemn villains” but rather “to craft nuanced accounts that help us better understand complicated individuals and the worlds they inhabited.” Beverly Gage’s biography of J. Edgar Hoover, she said, does just that.
“Today, Hoover is infamous for his abuses of power as director of the FBI for 48 years. He instigated unprecedented levels of government surveillance and repression. He orchestrated illegal wiretaps, spread false rumors, and even planted evidence to suppress groups he deemed subversive, aggressively targeting alleged communists and Civil Rights activists in particular. While he professed to be a nonpartisan law-enforcement administrator, he used the FBI to support those who shared his own political views.
“Gage certainly does not hesitate to document his many abuses of power, but she also strives to make sure her readers see Hoover as ‘more than a one-dimensional tyrant and backroom schemer.’ As she writes, ‘This book is less about judging him and more about understanding him.’ She does this, as all good biographers do, by helping her readers see his humanity, beginning with a compelling account of his deeply troubled childhood. She presents a portrait of a highly intelligent, ambitious, ruthless, flawed, and deeply contradictory man.
“Yet the additional and crucial message that Gage so expertly conveys is that, despite his reputation today, Hoover was extremely popular not only with political elites but with much of the American public. In doing so, she forces us to avoid demonizing one individual and instead look more honestly at our shared history. As she notes, ‘To look at him is also to look at ourselves, at what America valued and fought over during those years, what we tolerated and what we refused to see.’”
‘Stalin’
by Stephen Kotkin
Fredrik Logevall Laurence D. Belfer Professor of International Affairs, Professor of History
“It’s challenging to write in-depth studies of terrible people,” Logevall said. “Somehow monsters must be made to be human and complex if we are to understand why they behaved as they did. One work that succeeds marvelously in this regard is Stephen Kotkin’s ‘Stalin,’ a multi-volume biography of the Soviet dictator.
“This is biography on a grand scale, a textured, analytically nuanced narrative drawing on immense research in a wide array of sources. It is, moreover, a true ‘life and times’ study, in which Kotkin uses his skills as historian to contextualize Stalin’s life, situating him within the broader environment in which he rose to power. In so doing, Kotkin adroitly balances the roles played by individual agency on the one hand, with deeper, structural forces on the other, while also revealing much about those with whom Stalin shared the stage, not least Vladimir Lenin and Leon Trotsky. The two volumes bring out what the third will also surely show, and what more recent history amply demonstrates: that if circumstances make the leader, the reverse can be no less true.”
‘King Leopold’s Ghost’
by Adam Hochschild
“There are villains, and then there is King Leopold II, the man at the center of Adam Hochschild’s brilliant and disturbing account of the Belgian who seized the territory surrounding the Congo River, plundered it, and destroyed its people,” said Thomas. “‘King Leopold’s Ghost’ not only brought to light the long-overlooked crimes of the despot, but it also tells stories of people who suffered from them and of those who resisted them. That project is now, and always, necessary, if we are to remain aware of the moral dimension of human affairs.”
Campus & Community
Harvard to advance corporate engagement strategy
Roche Genentech Innovation Center Boston will be based at Harvard’s Enterprise Research Campus in Allston, which they toured during its construction phase in March.
Veasey Conway/Harvard Staff Photographer
Julie McDonough
Harvard Staff
June 23, 2025
8 min read
Findings by 2 committees highlight opportunities for growth and expansion
Harvard is preparing to advance its corporate engagement strategy, based upon recommendations published last year by two ad hoc committees. Those committees found that the University could benefit from broadening and strengthening corporate engagement aligned with its core mission and values. Since assuming his role as provost, John Manning has continued to support this work as it has moved toward implementation.
The Corporate Relations Research Policy (CRRP) Committee, chaired by John Shaw, vice provost for research, and the Corporate Relations Researcher Engagement (CRRE) Committee, chaired by Amy Wagers, chair of the Department of Stem Cell and Regenerative Biology and the Forst Family Professor of Stem Cell and Regenerative Biology, undertook a review of the University’s current policies, processes, and support related to engaging with corporations. Supported through the Office of the Vice Provost for Research (OVPR), the committees published a series of recommendations aimed at better coordinating work across the University, exploring new ideas for engagement, and ensuring that students and faculty have the appropriate safeguards when engaging in corporate work.
“As a faculty member, and later as provost, I had witnessed the many benefits that can emerge when academic institutions and industry work together for the common good,” said President Alan M. Garber, who convened the committees in June 2023 as provost. “That’s why I asked Vice Provost for Research John Shaw to determine how we might facilitate those collaborations, including through the creation of these committees. Their work is enabling Harvard to leverage and create opportunities to both further our academic mission and push the frontiers of research, ultimately benefiting the public. I am excited to see the many ways in which our excellence will flourish as we implement the committees’ recommendations.”
For many years, Harvard has engaged with private corporations and related entities as a way to inform and strengthen its intellectual mission, support scholarship and students, and translate research discoveries to benefit society broadly. Current examples of corporate engagements include:
Harvard’s relationship with Roche, recently strengthened by the announcement of the Roche Genentech Innovation Center Boston, based at Harvard’s Enterprise Research Campus in Allston.
In 2022, Amazon Web Services (AWS) provided both sponsored and philanthropic support to advance fundamental research and innovation in quantum computing, as well as enable the AWS Impact Computing Project at the Harvard Data Science Institute, a collaboration aimed at reimagining data science to identify potential solutions for society’s most complex challenges.
Through the efforts of the Office of Technology Development (OTD), a wide range of corporate sponsorships and University-wide research alliances with companies, including Deerfield Management, Tata Group, and UCB, have helped advance scientific discovery across the University.
The CRRP Committee was charged with assessing and envisioning mechanisms for advancing corporate engagement in research support. The CRRE Committee was charged with identifying the roles and responsibilities of faculty and other researchers engaged in corporate-sponsored research and recommending safeguards to ensure these relationships are aligned with the University’s mission and benefit all those who participate. Both committees had inclusive representation of faculty and staff from across Harvard’s Schools and central administration and reached their recommendations with the aid of input from students and other stakeholders across the University, as well as from corporate entities that have established agreements with the University.
“While Harvard has benefited from corporate engagement within various Schools, departments, and centers, the research done by CRRP showed that engagement could be strengthened by a University-wide strategy and approach,” said Shaw. “Expanding the mechanisms we have to enable corporate alliances, and further enhancing coordination across the University, will allow us to strengthen the ways we support research.”
“The CRRE Committee’s review of corporate engagement from a stakeholder perspective found that interest and participation in corporate partnerships have both expanded and evolved in recent years, offering exciting opportunities for creative and unique programs beyond the traditional sponsored research agreements and graduate fellowship programs,” said Wagers. “With appropriate policies and safeguards in place for our students, faculty, and other researchers, our Harvard community stands to realize great benefits from increased engagement with corporate partners.”
Steven Currall was named the executive director and associate vice provost for academic-corporate initiatives.
Photo by Ryan Noone/University of South Florida
New executive director and associate vice provost for academic-corporate initiatives to drive steering committee and implementation
One of the top recommendations to emerge from the committees was to establish a Corporate Relations Steering Committee as a resource to foster University-wide coordination of corporate engagement activities that support research and build upon institutional strengths and capacity. Envisioned as a small, nimble group, made up of faculty members engaged in corporate collaboration and leadership from OVPR, OTD, and the University Development Office (UDO), the steering committee will:
Provide a University-wide strategy that considers the breadth of potential engagements including gifts, sponsored research, and new types of agreements;
Provide guidance for complex corporate engagements that span many different forms of opportunity across the University, including those that contain gifts and sponsored research or other components;
For instances of complex engagement, provide streamlining and support to ensure expedient and thorough review.
Other recommendations included expanding training and building awareness of policies and procedures, developing models and roadmaps for corporate engagement, creating a database of current engagements, and initiating pilot programs in priority research areas.
To lead this work, a core operational team of the steering committee has been established with leadership from Sam Liss (OTD), Anne Gotfredson (UDO), and Steven Currall (OVPR), who began June 16 as the new executive director and associate vice provost for academic-corporate initiatives. Currall served as special adviser to both committees, providing advisory support, data collection and analyses, and benchmarking. He is currently an associate in the John A. Paulson School of Engineering and Applied Sciences.
Prior to joining Harvard, Currall was dean of the Graduate School of Management at the University of California, Davis; provost and vice president for academic affairs at Southern Methodist University; and president of the University of South Florida. He previously served as a commissioner of the U.S. Council on Competitiveness, which is made up of university and corporate leaders committed to bolstering America’s investments in innovation, technology, and infrastructure. His publications have appeared in Nature, Nature Nanotechnology, Nature Reviews Bioengineering, Issues in Science and Technology, and leading management journals such as Organization Science.
“There is so much opportunity for Harvard to engage strategically with corporate entities across the University in ways that align with our academic mission and increase our ability to benefit society,” said Currall. “During our research for the committees, we heard from corporations who want to make a positive impact on the world and want to build a relationship with a university committed to the same goal.”
OVPR, in partnership with OTD, will launch this new effort with a series of workshops this summer to support faculty in advancing their research through corporate engagement. The workshops will also help clarify the current policies and mechanisms for corporate engagement, and begin to consider the opportunities for working strategically across the University.
“Corporate alliances play a vital role in advancing Harvard’s research programs and innovation,” said Vivian Berlin, executive director at Harvard Medical School and managing director of strategic partnerships at OTD. “At OTD, we look forward to continuing our support of Harvard researchers across the University to expand engagement with our corporate and venture collaborators.”
Following the workshops, one of the first initiatives the steering committee will undertake will be to identify and support innovative pilot projects, per the recommendations contained in the reports. These projects will foster connections between faculty-led research efforts across Schools in ways that allow Harvard to proactively identify and advance opportunities for investment and direct engagement in research from corporate entities. A call for faculty proposals is forthcoming.
“More extensive and better integration with corporate partners is critical to rapidly advancing our discoveries to the medicines that can help patients in need,” said Mark Namchuk, Puja and Samir Kaul Professor of the Practice of Biomedical Innovation and Translation and executive director of therapeutics translation. “This new effort will help faculty across the University to advance their own research and amplify its benefit to society as a whole.”
Florian Dörfler develops algorithms that keep our power grids stable using mathematics and, as he says, a strong willingness to take risks. He has now been awarded the Rössler Prize, the most prestigious honour for young professors at ETH Zurich.
Researchers at ETH Zurich are developing a model in the lab made from human breast milk cells. They hope it will help them understand how breast milk is made – a little-researched area of female biology.
Six spin-offs of ETH Zurich were among the 18 finalists in this year’s Venture Awards. Three of them won in their respective categories, and three came in second. The Grand Prize went to spin-off MyNerva.
Why are more new housing units being constructed in Geneva each year, while Zurich is seeing a decline? Why are older residential buildings in Basel, Geneva and Lausanne being vertically extended, while they are being demolished and replaced in Zurich? ETH researchers provide new answers about the role of housing construction and its social impact.
In the Northeastern United States, the Gulf of Maine represents one of the most biologically diverse marine ecosystems on the planet — home to whales, sharks, jellyfish, herring, plankton, and hundreds of other species. But even as this ecosystem supports rich biodiversity, it is undergoing rapid environmental change. The Gulf of Maine is warming faster than 99 percent of the world’s oceans, with consequences that are still unfolding.
A new research initiative developing at MIT Sea Grant, called LOBSTgER — short for Learning Oceanic Bioecological Systems Through Generative Representations — brings together artificial intelligence and underwater photography to document the ocean life left vulnerable to these changes and share them with the public in new visual ways. Co-led by underwater photographer and visiting artist at MIT Sea Grant Keith Ellenbogen and MIT mechanical engineering PhD student Andreas Mentzelopoulos, the project explores how generative AI can expand scientific storytelling by building on field-based photographic data.
Just as the 19th-century camera transformed our ability to document and reveal the natural world — capturing life with unprecedented detail and bringing distant or hidden environments into view — generative AI marks a new frontier in visual storytelling. Like early photography, AI opens a creative and conceptual space, challenging how we define authenticity and how we communicate scientific and artistic perspectives.
In the LOBSTgER project, generative models are trained exclusively on a curated library of Ellenbogen’s original underwater photographs — each image crafted with artistic intent, technical precision, accurate species identification, and clear geographic context. By building a high-quality dataset grounded in real-world observations, the project ensures that the resulting imagery maintains both visual integrity and ecological relevance. In addition, LOBSTgER’s models are built using custom code developed by Mentzelopoulos to protect the process and outputs from any potential biases from external data or models. LOBSTgER’s generative AI builds upon real photography, expanding the researchers’ visual vocabulary to deepen the public’s connection to the natural world.
At its heart, LOBSTgER operates at the intersection of art, science, and technology. The project draws from the visual language of photography, the observational rigor of marine science, and the computational power of generative AI. By uniting these disciplines, the team is not only developing new ways to visualize ocean life — they are also reimagining how environmental stories can be told. This integrative approach makes LOBSTgER both a research tool and a creative experiment — one that reflects MIT’s long-standing tradition of interdisciplinary innovation.
Underwater photography in New England’s coastal waters is notoriously difficult. Limited visibility, swirling sediment, bubbles, and the unpredictable movement of marine life all pose constant challenges. For the past several years, Ellenbogen has navigated these challenges and is building a comprehensive record of the region’s biodiversity through the project, Space to Sea: Visualizing New England’s Ocean Wilderness. This large dataset of underwater images provides the foundation for training LOBSTgER’s generative AI models. The images span diverse angles, lighting conditions, and animal behaviors, resulting in a visual archive that is both artistically striking and biologically accurate.
LOBSTgER’s custom diffusion models are trained to replicate not only the biodiversity Ellenbogen documents, but also the artistic style he uses to capture it. By learning from thousands of real underwater images, the models internalize fine-grained details such as natural lighting gradients, species-specific coloration, and even the atmospheric texture created by suspended particles and refracted sunlight. The result is imagery that not only appears visually accurate, but also feels immersive and moving.
The models can both generate new, synthetic, but scientifically accurate images unconditionally (i.e., requiring no user input/guidance), and enhance real photographs conditionally (i.e., image-to-image generation). By integrating AI into the photographic workflow, Ellenbogen will be able to use these tools to recover detail in turbid water, adjust lighting to emphasize key subjects, or even simulate scenes that would be nearly impossible to capture in the field. The team also believes this approach may benefit other underwater photographers and image editors facing similar challenges. This hybrid method is designed to accelerate the curation process and enable storytellers to construct a more complete and coherent visual narrative of life beneath the surface.
In one key series, Ellenbogen captured high-resolution images of lion’s mane jellyfish, blue sharks, American lobsters, and ocean sunfish (Mola mola) while free diving in coastal waters. “Getting a high-quality dataset is not easy,” Ellenbogen says. “It requires multiple dives, missed opportunities, and unpredictable conditions. But these challenges are part of what makes underwater documentation both difficult and rewarding.”
Mentzelopoulos has developed original code to train a family of latent diffusion models for LOBSTgER grounded on Ellenbogen’s images. Developing such models requires a high level of technical expertise, and training models from scratch is a complex process demanding hundreds of hours of computation and meticulous hyperparameter tuning.
The project reflects a parallel process: field documentation through photography and model development through iterative training. Ellenbogen works in the field, capturing rare and fleeting encounters with marine animals; Mentzelopoulos works in the lab, translating those moments into machine-learning contexts that can extend and reinterpret the visual language of the ocean.
“The goal isn’t to replace photography,” Mentzelopoulos says. “It’s to build on and complement it — making the invisible visible, and helping people see environmental complexity in a way that resonates both emotionally and intellectually. Our models aim to capture not just biological realism, but the emotional charge that can drive real-world engagement and action.”
LOBSTgER points to a hybrid future that merges direct observation with technological interpretation. The team’s long-term goal is to develop a comprehensive model that can visualize a wide range of species found in the Gulf of Maine and, eventually, apply similar methods to marine ecosystems around the world.
The researchers suggest that photography and generative AI form a continuum, rather than a conflict. Photography captures what is — the texture, light, and animal behavior during actual encounters — while AI extends that vision beyond what is seen, toward what could be understood, inferred, or imagined based on scientific data and artistic vision. Together, they offer a powerful framework for communicating science through image-making.
In a region where ecosystems are changing rapidly, the act of visualizing becomes more than just documentation. It becomes a tool for awareness, engagement, and, ultimately, conservation. LOBSTgER is still in its infancy, and the team looks forward to sharing more discoveries, images, and insights as the project evolves.
Can you spot the real photo? One of these blue shark images was captured 30 nautical miles off the coast of Cape Cod; the other was generated by LOBSTgER’s diffusion models after 30,000 training epochs (an epoch refers to one complete pass of an entire training dataset through a learning algorithm). Answer at the end of this article.
Answer from the lead image: The left image was generated using LOBSTgER’s unconditional models and the right image is real.
Since MIT opened the first-of-its-kind venture studio within a university in 2019, it has demonstrated how a systemic process can help turn research into impactful ventures.
Now, MIT Proto Ventures is launching the “R&D Venture Studio Playbook,” a resource to help universities, national labs, and corporate R&D offices establish their own in-house venture studios. The online publication offers a comprehensive framework for building ventures from the ground up within research environments.
“There is a huge opportunity cost to letting great research sit idle,” says Fiona Murray, associate dean for innovation at the MIT Sloan School of Management and a faculty director for Proto Ventures. “The venture studio model makes research systematic, rather than messy and happenstance.”
Bigger than MIT
The new playbook arrives amid growing national interest in revitalizing the United States’ innovation pipeline — a challenge underscored by the fact that just a fraction of academic patents ever reach commercialization.
“Venture-building across R&D organizations, and especially within academia, has been based on serendipity,” says MIT Professor Dennis Whyte, a faculty director for Proto Ventures who helped develop the playbook. “The goal of R&D venture studios is to take away the aspect of chance — to turn venture-building into a systemic process. And this is something not just MIT needs; all research universities and institutions need it.”
Indeed, MIT Proto Ventures is actively sharing the playbook with peer institutions, federal agencies, and corporate R&D leaders seeking to increase the translational return on their research investments.
“We’ve been following MIT’s Proto Ventures model with the vision of delivering new ventures that possess both strong tech push and strong market pull,” says Mark Arnold, associate vice president of Discovery to Impact and managing director of Texas startups at The University of Texas at Austin. “By focusing on market problems first and creating ventures with a supportive ecosystem around them, universities can accelerate the transition of ideas from the lab into real-world solutions.”
What’s in the playbook
The playbook outlines the venture studio model process followed by MIT Proto Ventures. MIT’s venture studio embeds full-time entrepreneurial scientists — called venture builders — inside research labs. These builders work shoulder-to-shoulder with faculty and graduate students to scout promising technologies, validate market opportunities, and co-create new ventures.
“We see this as an open-source framework for impact,” says MIT Proto Ventures Managing Director Gene Keselman. “Our goal is not just to build startups out of MIT — it’s to inspire innovation wherever breakthrough science is happening.”
The playbook was developed by the MIT Proto Ventures team — including Keselman, venture builders David Cohen-Tanugi and Andrew Inglis, and faculty leaders Murray, Whyte, Andrew Lo, Michael Cima, and Michael Short.
“This problem is universal, so we knew if it worked there’d be an opportunity to write the book on how to build a translational engine,” Keselman said. “We’ve had enough success now to be able to say, ‘Yes, this works, and here are the key components.’”
In addition to detailing core processes, the playbook includes case studies, sample templates, and guidance for institutions seeking to tailor the model to fit their unique advantages. It emphasizes that building successful ventures from R&D requires more than mentorship and IP licensing — it demands deliberate, sustained focus, and a new kind of translational infrastructure.
How it works
A key part of MIT’s venture studio is structuring efforts into distinct tracks or problem areas — MIT Proto Ventures calls these channels. Venture builders work in a single track that aligns with their expertise and interest. For example, Cohen-Tanugi is embedded in the MIT Plasma Science and Fusion Center, working in the Fusion and Clean Energy channel. His first two successes have been a venture using superconducting magnets for in-space propulsion and a deep-tech startup improving power efficiency in data centers.
“This playbook is both a call to action and a blueprint,” says Cohen-Tanugi, lead author of the playbook. “We’ve learned that world-changing inventions often remain on the lab bench not because they lack potential, but because no one is explicitly responsible for turning them into businesses. The R&D venture studio model fixes that.”
A huge number of opportunities to change the world through technological innovation are not being realized: We’re only seeing the tip of the iceberg. A new playbook from MIT Proto Ventures outlines how organizations can fix their R&D translation pipeline.
Four MIT rising seniors have been selected to receive a 2025 Barry Goldwater Scholarship, including Avani Ahuja and Jacqueline Prawira in the School of Engineering and Julianna Lian and Alex Tang from the School of Science. An estimated 5,000 college sophomores and juniors from across the United States were nominated for the scholarships, of whom only 441 were selected.
The Goldwater Scholarships have been conferred since 1989 by the Barry Goldwater Scholarship and Excellence in Education Foundation. These scholarships have supported undergraduates who go on to become leading scientists, engineers, and mathematicians in their respective fields.
Avani Ahuja, a mechanical engineering and electrical engineering major, conducts research in the Conformable Decoders group, where she is focused on developing a “wearable conformable breast ultrasound patch” that makes ultrasounds for breast cancer more accessible.
“Doing research in the Media Lab has had a huge impact on me, especially in the ways that we think about inclusivity in research,” Ahuja says.
In her research group, Ahuja works under Canan Dagdeviren, the LG Career Development Professor of Media Arts and Sciences. Ahuja plans to pursue a PhD in electrical engineering. She aspires to conduct research in electromechanical systems for women’s health applications and teach at the university level.
“I want to thank Professor Dagdeviren for all her support. It’s an honor to receive this scholarship, and it’s amazing to see that women’s health research is getting recognized in this way,” Ahuja says.
Julianna Lian studies mechanochemistry, organic, and polymer chemistry in the lab of Professor Jeremiah Johnson, the A. Thomas Guertin Professor of Chemistry. In addition to her studies, she serves the MIT community as an emergency medical technician (EMT) with MIT Emergency Medical Services, is a member of MIT THINK, and a ClubChem mentorship chair.
“Receiving this award has been a tremendous opportunity to not only reflect on how much I have learned, but also on the many, many people I have had the chance to learn from,” says Lian. “I am deeply grateful for the guidance, support, and encouragement of these teachers, mentors, and friends. And I am excited to carry forward the lasting curiosity and excitement for chemistry that they have helped inspire in me.”
After graduation, Lian plans to pursue a PhD in organic chemistry, conduct research at the interface of synthetic chemistry and materials science, aided by computation, and teach at the university level.
Jacqueline Prawira, a materials science and engineering major, joined the Center of Decarbonization and Electrification of Industry as a first-year Undergraduate Research Opportunities Program student and became a co-inventor on a patent and a research technician at spinout company Rock Zero. She has also worked in collaboration with Indigenous farmers and Diné College students on the Navajo Nation.
“I’ve become significantly more cognizant of how I listen to people and stories, the tangled messiness of real-world challenges, and the critical skills needed to tackle complex sustainability issues,” Prawira says.
Prawira is mentored by Yet-Ming Chiang, professor of materials science and engineering. Her career goals are to pursue a PhD in materials science and engineering and to research sustainable materials and processes to solve environmental challenges and build a sustainable society.
“Receiving the prestigious title of 2025 Goldwater Scholar validates my current trajectory in innovating sustainable materials and demonstrates my growth as a researcher,” Prawira says. “This award signifies my future impact in building a society where sustainability is the norm, instead of just another option.”
Alex Tang studies the effects of immunotherapy and targeted molecular therapy on the tumor microenvironment in metastatic colorectal cancer patients. He is supervised by professors Jonathan Chen at Northwestern University and Nir Hacohen at the Broad Institute of MIT and Harvard.
“My mentors and collaborators have been instrumental to my growth since I joined the lab as a freshman. I am incredibly grateful for the generous mentorship and support of Professor Hacohen and Professor Chen, who have taught me how to approach scientific investigation with curiosity and rigor,” says Tang. “I’d also like to thank my advisor Professor Adam Martin and first-year advisor Professor Angela Belcher for their guidance throughout my undergraduate career thus far. I am excited to carry forward this work as I progress in my career.” Tang intends to pursue physician-scientist training following graduation.
The Scholarship Program honoring Senator Barry Goldwater was designed to identify, encourage, and financially support outstanding undergraduates interested in pursuing research careers in the sciences, engineering, and mathematics. The Goldwater Scholarship is the preeminent undergraduate award of its type in these fields.
Clockwise from top left: Avani Ahuja, Julianna Lian, Alex Tang and Jacqueline Prawira are MIT’s newest Goldwater Scholars.
In 2025, MIT granted tenure to 11 faculty members across the School of Engineering. This year’s tenured engineers hold appointments in the departments of Aeronautics and Astronautics, Biological Engineering, Chemical Engineering, Electrical Engineering and Computer Science (EECS) — which reports jointly to the School of Engineering and MIT Schwarzman College of Computing — Materials Science and Engineering, Mechanical Engineering, and Nuclear Science and Engineering.
“It is with great pride that I congratulate the 11 newest tenured faculty members in the School of Engineering. Their dedication to advancing their fields, mentoring future innovators, and contributing to a vibrant academic community is truly inspiring,” says Anantha Chandrakasan, chief innovation and strategy officer, dean of engineering, and the Vannevar Bush Professor of Electrical Engineering and Computer Science who will assume the title of MIT provost July 1. “This milestone is not only a testament to their achievements, but a promise of even greater impact ahead.”
This year’s newly tenured engineering faculty include:
Bryan Bryson, the Phillip and Susan Ragon Career Development Professor in the Department of Biological Engineering, conducts research in infectious diseases and immunoengineering. He is interested in developing new tools to dissect the complex dynamics of bacterial infection at a variety of scales, ranging from single cells to infected animals, sitting in both “reference frames” by taking an immunologist’s and a microbiologist’s perspective.
Connor Coley is the Class of 1957 Career Development Professor and associate professor of chemical engineering, with a shared appointment in EECS. His research group develops new computational methods at the intersection of artificial intelligence and chemistry with relevance to small molecule drug discovery, chemical synthesis, and structure elucidation.
Mohsen Ghaffari is the Steven and Renee Finn Career Development Professor and an associate professor in EECS. His research explores the theory of distributed and parallel computation. He has done influential work on a range of algorithmic problems, including generic derandomization methods for distributed computing and parallel computing, improved distributed algorithms for graph problems, sublinear algorithms derived via distributed techniques, and algorithmic and impossibility results for massively parallel computation.
Rafael Gomez-Bombarelli, the Paul M. Cook Development Professor and associate professor of materials science and engineering, works at the interface between machine learning and atomistic simulations. He uses computational tools to tackle design of materials in complex combinatorial search spaces, such as organic electronic materials, energy storage polymers and molecules, and heterogeneous (electro)catalysts.
Song Han, an associate professor in EECS, is a pioneer in model compression and TinyML. He has innovated in key areas of pruning, quantization, parallelization, KV cache optimization, long-context learning, and multi-modal representation learning to minimize generative AI costs, and he designed the first hardware accelerator (EIE) to exploit weight sparsity.
Kaiming He, the Douglass Ross (1954) Career Development Professor of Software Technology and an associate professor in EECS, is best known for his work on deep residual networks (ResNets). His research focuses on building computer models that can learn representations and develop intelligence from and for the complex world, with the long-term goal of augmenting human intelligence with more capable artificial intelligence.
Phillip Isola, the Class of 1948 Career Development Professor and associate professor in EECS, studies computer vision, machine learning, and AI. His research aims to uncover fundamental principles of intelligence, with a particular focus on how models and representations of the world can be acquired through self-supervised learning, from raw sensory experience alone, and without the use of labeled data.
Mingda Li is the Class of 1947 Career Development Professor and an associate professor in the Department of Nuclear Science and Engineering. His research lies in characterization and computation.
Richard Linares is an associate professor in the Department of Aeronautics and Astronautics. His research focuses on astrodynamics, space systems, and satellite autonomy. Linares develops advanced computational tools and analytical methods to address challenges associated with space traffic management, space debris mitigation, and space weather modeling.
Jonathan Ragan-Kelley, an associate professor in EECS, has designed everything from tools for visual effects in movies to the Halide programming language that’s widely used in industry for photo editing and processing. His research focuses on high-performance computer graphics and accelerated computing, at the intersection of graphics with programming languages, systems, and architecture.
Arvind Satyanarayan is an associate professor in EECS. His research areas cover data visualization, human-computer interaction, and artificial intelligence and machine learning. He leads the MIT Visualization Group, which uses interactive data visualization as a petri dish to study intelligence augmentation — how computation can help amplify human cognition and creativity while respecting our agency.
Launched in February of this year, the MIT Generative AI Impact Consortium (MGAIC), a presidential initiative led by MIT’s Office of Innovation and Strategy and administered by the MIT Stephen A. Schwarzman College of Computing, issued a call for proposals, inviting researchers from across MIT to submit ideas for innovative projects studying high-impact uses of generative AI models.
The call received 180 submissions from nearly 250 faculty members, spanning all of MIT’s five schools and the college. The overwhelming response across the Institute exemplifies the growing interest in AI and follows in the wake of MIT’s Generative AI Week and call for impact papers. Fifty-five proposals were selected for MGAIC’s inaugural seed grants, with several more selected to be funded by the consortium’s founding company members.
Over 30 funding recipients presented their proposals to the greater MIT community at a kickoff event on May 13. Anantha P. Chandrakasan, chief innovation and strategy officer and dean of the School of Engineering who is head of the consortium, welcomed the attendees and thanked the consortium’s founding industry members.
“The amazing response to our call for proposals is an incredible testament to the energy and creativity that MGAIC has sparked at MIT. We are especially grateful to our founding members, whose support and vision helped bring this endeavor to life,” adds Chandrakasan. “One of the things that has been most remarkable about MGAIC is that this is a truly cross-Institute initiative. Deans from all five schools and the college collaborated in shaping and implementing it.”
Vivek F. Farias, the Patrick J. McGovern (1959) Professor at the MIT Sloan School of Management and co-faculty director of the consortium with Tim Kraska, associate professor of electrical engineering and computer science in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), emceed the afternoon of five-minute lightning presentations.
Presentation highlights include:
“AI-Driven Tutors and Open Datasets for Early Literacy Education,” presented by Ola Ozernov-Palchik, a research scientist at the McGovern Institute for Brain Research, proposed refining AI tutors for pK-7 students to help decrease literacy disparities.
“Developing jam_bots: Real-Time Collaborative Agents for Live Human-AI Musical Improvisation,” presented by Anna Huang, assistant professor of music and assistant professor of electrical engineering and computer science, and Joe Paradiso, the Alexander W. Dreyfoos (1954) Professor in Media Arts and Sciences at the MIT Media Lab, aims to enhance human-AI musical collaboration in real-time for live concert improvisation.
“GENIUS: GENerative Intelligence for Urban Sustainability,” presented by Norhan Bayomi, a postdoc at the MIT Environmental Solutions Initiative and a research assistant in the Urban Metabolism Group, aims to close a critical gap: the lack of a standardized approach for evaluating and benchmarking cities’ climate policies.
Georgia Perakis, the John C Head III Dean (Interim) of the MIT Sloan School of Management and professor of operations management, operations research, and statistics, who serves as co-chair of the GenAI Dean’s oversight group with Dan Huttenlocher, dean of the MIT Schwarzman College of Computing, ended the event with closing remarks that emphasized “the readiness and eagerness of our community to lead in this space.”
“This is only the beginning,” she continued. “We are at the front edge of a historic moment — one where MIT has the opportunity, and the responsibility, to shape the future of generative AI with purpose, with excellence, and with care.”
Anantha P. Chandrakasan, chief innovation and strategy officer and dean of the School of Engineering who is head of the MIT Generative AI Impact Consortium (MGAIC), kicks off an afternoon of presentations.
Science & Tech
Shining light on scientific superstar
The Vera C. Rubin Observatory, a new astronomy and astrophysics facility in Cerro Pachón, Chile. Courtesy of Vera C. Rubin Observatory
Kermit Pattison
Harvard Staff Writer
June 20, 2025
5 min read
Vera Rubin, whose dark-matter discoveries changed astronomy and physics, gets her due with namesake observatory, commemorative quarter
Nearly 80 years ago, a promising astronomy student named Vera Rubin passed up the opportunity for graduate study at Harvard. Now, a decade after her death, the pioneering astronomer will be celebrated on campus as a scientific superstar.
Rubin, whose discoveries about dark matter transformed astronomy and physics, will be honored with a weeklong series of events starting June 23, including the first public release of images from a new observatory bearing her name and the unveiling of a commemorative quarter.
“Intellectually, we’re all still staggering around with the consequences of the astronomy that she did,” said Christopher W. Stubbs, the Samuel C. Moncher Professor of Physics and of Astronomy and a member of the scientific team for the new observatory. “She brought scientific chaos that we’ve all been wrestling with ever since.”
The celebration will kick off Monday with a livestream of the first images from the Vera C. Rubin Observatory, a new astronomy and astrophysics facility in Cerro Pachón, Chile — a mountaintop site chosen because its aridity and its 2,600-meter altitude offer clear views of the sky.
Funded by the U.S. National Science Foundation and the U.S. Department of Energy, the 350-ton instrument is the most powerful survey telescope in the world and incorporates the largest digital camera ever constructed. It will take detailed images of the Southern Hemisphere sky to compile ultra-wide, ultra-high-definition, time-lapse video of the cosmos.
The first images will be released at 11 a.m. The main unveiling will take place at the National Academy of Sciences in Washington, D.C., and will be available online and to watch parties around the globe. The Harvard gathering will begin at 10:30 a.m. in Jefferson Lab 250.
More than two decades ago, Stubbs was among a group of scientists who won a federal grant to begin planning the new telescope. That proposal eventually grew into the $800 million observatory that will begin service this month after many twists and turns and collaborations with other institutions.
Stubbs said the new images will be spectacular.
“When you look at these pictures, you just kind of go, ‘Wow, look at all those galaxies!’” he said. “It’s like a wallpaper of galaxies — near ones, far ones, red ones, blue ones, interacting, colliding, different shapes, different sizes.”
The telescope will repeatedly sweep the sky in a 10-year survey. It will produce 20 terabytes of data every night and in one year will generate more optical astronomy data than all previous telescopes combined.
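Taken at face value, the nightly figure implies a multi-petabyte annual archive. A minimal back-of-the-envelope check, assuming roughly 300 usable observing nights per year (an assumption for illustration; the article does not give this number):

```python
# Rough scale check on the survey's data volume. The 20 TB/night figure is
# from the article; ~300 usable observing nights per year is an assumption
# (weather and maintenance reduce the 365-night maximum).
TB_PER_NIGHT = 20
NIGHTS_PER_YEAR = 300

annual_tb = TB_PER_NIGHT * NIGHTS_PER_YEAR
annual_pb = annual_tb / 1000  # 1 PB = 1000 TB
print(f"~{annual_pb:.0f} PB of raw images per year")
```

Even under this conservative assumption, the survey produces on the order of 6 petabytes of imagery annually, which is why its yearly output is expected to exceed that of all previous optical telescopes combined.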
Vera Rubin measuring spectra in 1974.
Credit: Carnegie Institution for Science
“For solar system science, it’s a huge advance,” said Matt Holman, an astrophysicist at the Harvard-Smithsonian Center for Astrophysics. “It’s going to find nearly a factor of 10 more objects than we presently know, ranging from asteroids that we’re concerned might hit the Earth, to the most distant Kuiper Belt objects, and perhaps even planets in our solar system that we don’t know about.”
On Thursday, Harvard will host a series of talks and a science festival to celebrate Rubin. The event will coincide with the release of the Rubin quarter, part of a U.S. Mint program honoring influential American women.
Rubin is best known as the scientist who shined light on dark matter. Born in 1928, she became fascinated by astronomy as a child looking at stars outside her bedroom window.
She studied astronomy at Vassar College and won admission to the graduate program at Harvard, but chose to study at Cornell because her new husband was enrolled there.
As she later recalled, the director of the Harvard observatory sent a formal letter acknowledging her withdrawal and added a handwritten note: “Damn you women. Every time I get a good one ready, she goes off and gets married.”
Later, Rubin earned a Ph.D. from Georgetown and studied the properties and motions of distant galaxies.
As a female scientist, she repeatedly encountered condescension from male colleagues and difficulty accessing scientific facilities and conferences. She spent most of her career as a researcher at the Carnegie Institution in Washington, D.C., and raised four children, all of whom became scientists.
She is best known for her work showing that most of the universe is invisible. Her calculations showed that galaxies must contain at least five to 10 times more mass than can be observed directly based on the light emitted by ordinary matter.
By the time she died in 2016 at age 88, her discoveries had been largely affirmed.
“First of all, she made an amazing discovery, and anyone of any background who does that is worthy of respect,” said Elana Urbach, a Harvard postdoctoral researcher who works with data from the new Rubin observatory and is organizing the campus celebration. “But the fact that she made this amazing discovery with the adversity that she faced does add something to her story — and make her more of a role model.”
Science & Tech
A taste for microbes
A video of a brooding octopus mother interacting with a fake egg that was doped with a microbial molecule isolated from rejected octopus egg bacteria. The mother uses her siphon to eject the egg from her clutch.
Kermit Pattison
Harvard Staff Writer
June 20, 2025
5 min read
New research reveals how the octopus uses its arms to sense chemical clues from microbiomes
The octopus is a creature with sensitive feelings.
Most of its 500 million neurons are in its arms, which explore the seafloor like eight muscular tongues. It navigates the deep with a “taste by touch” nervous system powered by 10,000 sensory cells in each individual suction cup.
Now, a new study by Harvard biologists reveals part of what the octopus is feeling — biochemical information from the microbial world. By tasting the biochemicals emitted by ever-changing bacterial communities, the animal gains information essential for survival, such as whether prey is safe to eat or whether unhealthy eggs should be ejected from the nest.
“Everything is coated by microbes, especially in these underwater worlds,” said Rebecka Sepela, a postdoctoral researcher and lead author of the new study. “These microbial communities are constantly restructuring in response to environmental conditions and will pump out different chemicals to reflect their surface-specific surroundings. The octopus senses the chemicals made by certain microbes, such as those growing on the surfaces of crabs or eggs, to distinguish the vitals of these surfaces.”
The sensory system of the octopus has been a topic of ongoing research at Harvard. In 2020, researchers in the lab of Nicholas Bellono, a professor of molecular and cellular biology, detailed how “chemotactile receptors” armed octopuses with their unique taste-by-touch capability. In 2023, the group described how these sensory organs had evolved from the acetylcholine receptors of their ancestors — but differently in octopuses than in their cephalopod relative the squid.
The California two-spot octopus, Octopus bimaculoides.
Octopuses use chemotactile receptors to sense their surroundings.
Photo by Anik Grearson.
The California two-spot octopus incubates a clutch of eggs in her den.
Photo by Anik Grearson.
For the latest study, published Tuesday in the journal Cell, the Bellono team sought to better understand just what these organs were sensing. Octopuses forage by sweeping their arms over the seafloor and probing nooks and crannies for food. Even in the dark, they “blind feed” by relying only on the senses of their appendages. But it remained unclear just how they identified prey and other objects of interest.
To shed light on that question, the Harvard researchers simply let the animals show them what was important. The lab follows a “curiosity-based approach” of investigating biological novelties and trying to decipher the underlying mechanisms down to the level of molecules and proteins. It keeps California two-spot octopuses (a species native to the Pacific coast of the Americas) in saltwater tanks — with the lids fastened tight with Velcro straps and weighed down with bricks. “We’ve had them open their tanks and get out,” explained Bellono.
In watching the octopuses, the researchers saw that two objects elicited strong reactions — the shells of fiddler crabs (a favorite food) and octopus eggs.
“It was very octopus-centric,” said Sepela. “By keeping the animal at the center of our study, we were able to find molecules in the environment that are actually meaningful to the animal.”
The researchers found that octopuses happily fed on live crabs, but rejected decayed ones. Octopus mothers avidly cleaned and groomed their clutches of eggs, but sometimes ejected infertile or dead eggs.
When the scientists examined these materials under an electron microscope, they found stark differences in microbial communities. Live crabs had only a few microbes on their shells, but decaying crabs were coated by many types of bacteria. Likewise, eggs rejected by octopus mothers were covered by spirillum-shaped bacteria while healthy eggs were not.
An image under UV light of glass vials filled with increasing concentrations of a molecule isolated from microbial cultures (left), fiddler crabs (middle), and fiddler crabs that have been left to decay for two days (right). The decayed fiddler crabs glow blue under UV light, just like the molecule isolated from microbial cultures.
The scientists used RNA barcoding to reveal the taxonomic identities and abundances of these microbial communities before examining the molecules emitted by the microbes — and the responses these substances elicited in the octopus. The team cultivated nearly 300 strains of marine bacteria and tested their effects on octopus chemotactile receptors that had been cloned in the lab.
They discovered that certain microbes activated certain octopus receptors. In one dramatic finding, the scientists identified a molecule emitted by bacteria commonly found on eggs rejected by the mother octopus. Researchers made a fake egg, coated it with the substance, and dropped it into an octopus nest. After briefly grooming the egg, the mother ejected it from her brood.
Microbes — or single-celled organisms — are the most abundant creatures on Earth. The body of a single human hosts around 39 trillion microbes. Likewise, the Earth, waters, and even the air teem with microbial communities known as microbiomes.
Rebecka Sepela and Nicholas Bellono.
Niles Singer/Harvard Staff Photographer
Research on microbiomes focuses on the relationship between microbes and their hosts — how gut bacteria aid in digestion, for example — but the new paper explores a lesser-known realm: how animals interact with external microbes and adapt to an ever-changing world. Science has only a murky understanding of how multicellular animals read this outside microbiome.
“There is a lot more to be explored,” said Bellono. “Microbes are present on almost every surface. We had a nice system to look at this in the octopus, but that doesn’t mean it’s not happening across life.”
The Bellono Lab collaborated on the research with the teams of Jon Clardy, a professor of biological chemistry and molecular pharmacology at Harvard Medical School, and Ryan Hibbs, a professor of neurobiology at the University of California, San Diego.
Now, an international group of astronomers led by the University of Cambridge have shown that we will be able to learn about the masses of the earliest stars by studying a specific radio signal – created by hydrogen atoms filling the gaps between star-forming regions – originating just a hundred million years after the Big Bang.
By studying how the first stars and their remnants affected this signal, called the 21-centimetre signal, the researchers have shown that future radio telescopes will help us understand the very early universe, and how it transformed from a nearly homogeneous mass of mostly hydrogen to the incredible complexity we see today. Their results are reported in the journal Nature Astronomy.
“This is a unique opportunity to learn how the universe’s first light emerged from the darkness,” said co-author Professor Anastasia Fialkov from Cambridge’s Institute of Astronomy. “The transition from a cold, dark universe to one filled with stars is a story we’re only beginning to understand.”
The study of the universe’s most ancient stars hinges on the faint glow of the 21-centimetre signal, a subtle energy signal from over 13 billion years ago. This signal, influenced by the radiation from early stars and black holes, provides a rare window into the universe’s infancy.
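As a concrete illustration (ours, not a calculation from the paper): the 21-centimetre line has a rest frequency of about 1420.4 MHz, and cosmic expansion stretches its wavelength by a factor of (1 + z), so the signal from the Cosmic Dawn arrives in the low-frequency radio band that instruments like REACH target. A minimal sketch:

```rust
/// Rest frequency of the neutral-hydrogen hyperfine (21 cm) line, in MHz.
const F_REST_MHZ: f64 = 1420.405751;

/// Observed frequency of the 21 cm line emitted at redshift `z`.
/// Expansion stretches wavelengths by (1 + z), so frequency drops by the same factor.
fn observed_frequency_mhz(z: f64) -> f64 {
    F_REST_MHZ / (1.0 + z)
}

fn main() {
    // At z ≈ 27 (roughly 100 million years after the Big Bang), the line
    // arrives near 50 MHz -- far below its 1420 MHz rest frequency, which is
    // why Cosmic Dawn experiments must observe at very low radio frequencies.
    for z in [27.0, 15.0, 6.0] {
        println!("z = {:4.1} -> {:6.1} MHz", z, observed_frequency_mhz(z));
    }
}
```

The exact redshift range probed depends on instrument design; the scaling itself is standard cosmology.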
Fialkov leads the theory group of REACH (the Radio Experiment for the Analysis of Cosmic Hydrogen), a radio antenna and one of two major projects that could help us learn about the Cosmic Dawn and the Epoch of Reionisation, when the first stars reionised neutral hydrogen atoms in the universe.
Although REACH, which captures radio signals, is still in its calibration stage, it promises to reveal data about the early universe. Meanwhile, the Square Kilometre Array (SKA)—a massive array of antennas under construction—will map fluctuations in cosmic signals across vast regions of the sky.
Both projects are vital in probing the masses, luminosities, and distribution of the universe's earliest stars. In the current study, Fialkov – who is also a member of the SKA – and her collaborators developed a model that makes predictions for the 21-centimetre signal for both REACH and SKA, and found that the signal is sensitive to the masses of first stars.
“We are the first group to consistently model the dependence of the 21-centimetre signal on the masses of the first stars, including the impact of ultraviolet starlight and X-ray emissions from X-ray binaries produced when the first stars die,” said Fialkov, who is also a member of Cambridge’s Kavli Institute for Cosmology. “These insights are derived from simulations that integrate the primordial conditions of the universe, such as the hydrogen-helium composition produced by the Big Bang.”
In developing their theoretical model, the researchers studied how the 21-centimetre signal reacts to the mass distribution of the first stars, known as Population III stars. They found that previous studies have underestimated this connection as they did not account for the number and brightness of X-ray binaries – binary systems made of a normal star and a collapsed star – among Population III stars, and how they affect the 21-centimetre signal.
Unlike optical telescopes like the James Webb Space Telescope, which capture vivid images, radio astronomy relies on statistical analysis of faint signals. REACH and SKA will not be able to image individual stars, but will instead provide information about entire populations of stars, X-ray binary systems and galaxies.
“It takes a bit of imagination to connect radio data to the story of the first stars, but the implications are profound,” said Fialkov.
“The predictions we are reporting have huge implications for our understanding of the nature of the very first stars in the Universe,” said co-author Dr Eloy de Lera Acedo, Principal Investigator of the REACH telescope and PI at Cambridge of the SKA development activities. “We show evidence that our radio telescopes can tell us details about the mass of those first stars and how these early lights may have been very different from today’s stars.
“Radio telescopes like REACH are promising to unlock the mysteries of the infant Universe, and these predictions are essential to guide the radio observations we are doing from the Karoo, in South Africa.”
The research was supported in part by the Science and Technology Facilities Council (STFC), part of UK Research and Innovation (UKRI). Anastasia Fialkov is a Fellow of Magdalene College, Cambridge. Eloy de Lera Acedo is an STFC Ernest Rutherford Fellow and a Fellow of Selwyn College, Cambridge.
The formation of the first stars and galaxies, when the universe transitioned from darkness to light, was a key turning point in the universe’s development, known as the Cosmic Dawn. However, even with the most powerful telescopes, we can’t directly observe these earliest stars, so determining their properties is one of the biggest challenges in astronomy.
More than half of the nation’s 623,218 bridges are experiencing significant deterioration. Through an in-field case study conducted in western Massachusetts, a team led by the University of Massachusetts at Amherst in collaboration with researchers from the MIT Department of Mechanical Engineering (MechE) has just successfully demonstrated that 3D printing may provide a cost-effective, minimally disruptive solution.
“Anytime you drive, you go under or over a corroded bridge,” says Simos Gerasimidis, associate professor of civil and environmental engineering at UMass Amherst and former visiting professor in the Department of Civil and Environmental Engineering at MIT, in a press release. “They are everywhere. It’s impossible to avoid, and their condition often shows significant deterioration. We know the numbers.”
The numbers, according to the American Society of Civil Engineers’ 2025 Report Card for America’s Infrastructure, are staggering: Across the United States, 49.1 percent of the nation’s 623,218 bridges are in “fair” condition and 6.8 percent are in “poor” condition. The projected cost to restore all of these failing bridges exceeds $191 billion.
A proof-of-concept repair took place last month on a small, corroded section of a bridge in Great Barrington, Massachusetts. The technique, called cold spray, can extend the life of beams, reinforcing them with newly deposited steel. The process accelerates particles of powdered steel in heated, compressed gas, and then a technician uses an applicator to spray the steel onto the beam. Repeated sprays create multiple layers, restoring thickness and other structural properties.
This method has proven to be an effective solution for other large structures like submarines, airplanes, and ships, but bridges present a problem on a greater scale. Unlike movable vessels, stationary bridges cannot be brought to the 3D printer — the printer must be brought on-site — and, to lessen systemic impacts, repairs must also be made with minimal disruptions to traffic, which the new approach allows.
“Now that we’ve completed this proof-of-concept repair, we see a clear path to a solution that is much faster, less costly, easier, and less invasive,” says Gerasimidis. “To our knowledge, this is a first. Of course, there is some R&D that needs to be developed, but this is a huge milestone toward that.”
“This is a tremendous collaboration where cutting-edge technology is brought to address a critical need for infrastructure in the commonwealth and across the United States,” says John Hart, Class of 1922 Professor and head of the Department of Mechanical Engineering at MIT. Hart and Haden Quinlan, senior program manager in the Center for Advanced Production Technologies at MIT, are leading MIT’s efforts in the project. Hart is also faculty co-lead of the recently announced MIT Initiative for New Manufacturing.
“Integrating digital systems with advanced physical processing is the future of infrastructure,” says Quinlan. “We’re excited to have moved this technology beyond the lab and into the field, and grateful to our collaborators in making this work possible.”
UMass says the Massachusetts Department of Transportation (MassDOT) has been a valued research partner, helping to identify the problem and providing essential support for the development and demonstration of the technology. Technical guidance and funding support were provided by the MassDOT Highway Division and the Research and Technology Transfer Program.
Equipment for this project was supported through the Massachusetts Manufacturing Innovation Initiative, a statewide program led by the Massachusetts Technology Collaborative (MassTech)’s Center for Advanced Manufacturing that helps bridge the gap between innovation and commercialization in hard tech manufacturing.
“It’s a very Massachusetts success story,” Gerasimidis says. “It involves MassDOT being open-minded to new ideas. It involves UMass and MIT putting [together] the brains to do it. It involves MassTech to bring manufacturing back to Massachusetts. So, I think it’s a win-win for everyone involved here.”
The bridge in Great Barrington is scheduled for demolition in a few years. After demolition occurs, the recently sprayed beams will be taken back to UMass for testing and measurement, to study how well the deposited steel powder adhered to the structure in the field compared with a controlled lab setting, whether the material corroded further after it was sprayed, and what its mechanical properties are.
This demonstration builds on several years of research by the UMass and MIT teams, including development of a “digital thread” approach to scan corroded beam surfaces and determine material deposition profiles, alongside laboratory studies of cold spray and other additive manufacturing approaches that are suited to field deployment.
Altogether, this work is a collaborative effort among UMass Amherst, MIT MechE, MassDOT, the Massachusetts Technology Collaborative (MassTech), the U.S. Department of Transportation, and the Federal Highway Administration. Research reports are available on the MassDOT website.
Members of the UMass Amherst and MIT research team pose next to the 3D-printed patch. Haden Quinlan (front, kneeling), senior program manager in the Center for Advanced Production Technologies at MIT, is one of the researchers leading MIT’s efforts on the project.
Dr Alex Tsompanidis, senior researcher at the Autism Research Centre in the University of Cambridge, and the lead author of this new study, said: “Small variations in the prenatal levels of steroid hormones, like testosterone and oestrogen, can predict the rate of social and cognitive learning in infants and even the likelihood of conditions such as autism. This prompted us to consider their relevance for human evolution.”
One explanation for the evolution of the human brain may be in the way humans adapted to be social. Professor Robin Dunbar, an Evolutionary Biologist at the University of Oxford and joint senior author of this new study said: “We’ve known for a long time that living in larger, more complex social groups is associated with increases in the size of the brain. But we still don’t know what mechanisms may link these behavioural and physical adaptations in humans.”
In this new paper, published today in Evolutionary Anthropology, the researchers now propose that the mechanism may be found in prenatal sex steroid hormones, such as testosterone or oestrogens, and the way these affect the developing brain and behaviour in humans.
Using ‘mini-brains’ – clusters of human neuronal cells that are grown in a petri dish from donors’ stem cells – other scientists have been able to study, for the first time, the effects of these hormones on the human brain. Recent discoveries have shown that testosterone can increase the size of the brain, while oestrogens can improve the connectivity between neurons.
In both humans and other primates such as chimpanzees and gorillas, the placenta can link the mother’s and baby’s endocrine systems to produce these hormones in varying amounts.
Professor Graham Burton, Founding Director of the Loke Centre for Trophoblast Research at the University of Cambridge and co-author of the new paper, said: “The placenta regulates the duration of the pregnancy and the supply of nutrients to the fetus, both of which are crucial for the development of our species’ characteristically large brains. But the advantage of human placentas over those of other primates has been less clear.”
Two previous studies show that levels of oestrogen during pregnancy are higher in human pregnancies than in other primate species.
Another characteristic of humans as a species is our ability to form and maintain social groups larger than those of other primates and of extinct human species such as the Neanderthals. But to be able to do this, humans must have adapted in ways that maintain high levels of fertility, while also reducing competition for mates and resources within large groups.
Prenatal sex steroid hormones, such as testosterone and oestrogen, are also important for regulating the way males and females interact and develop, a process known as sex differentiation. For example, having higher testosterone relative to oestrogen leads to more male-like features in anatomy (e.g., in physical size and strength) and in behaviour (e.g., in competition).
But in humans, while these on-average sex differences exist, they are reduced compared to our closest primate relatives and to extinct human species (such as the Neanderthals). Instead, anatomical features that are specific to humans appear to be related more to aspects of female rather than male biology, and to the effects of oestrogens (e.g., reduced body hair, and a large ratio between the second and fourth digits).
The researchers propose that the key to explain this may lie again with the placenta, which rapidly turns testosterone to oestrogens, using an enzyme called aromatase. Recent discoveries show that humans have higher levels of aromatase compared to macaques, and that males may have slightly higher levels compared to females.
Bringing all these lines of evidence together, the authors propose that high levels of prenatal sex steroid hormones in the womb, combined with increased placental function, may have made human brains larger and more interconnected. At the same time, a lower ratio of androgens (like testosterone) to oestrogens may have led to reductions in competition between males, while also improving fertility in females, allowing humans to form larger, more cohesive social groups.
Professor Simon Baron-Cohen, Director of the Autism Research Centre at the University of Cambridge and joint senior author on the paper, said: “We have been studying the effects of prenatal sex steroids on neurodevelopment for the past 20 years. This has led to the discovery that prenatal sex steroids are important for neurodiversity in human populations. This new hypothesis takes this further in arguing that these hormones may have also shaped the evolution of the human brain.”
Dr Tsompanidis added: “Our hypothesis puts pregnancy at the heart of our story as a species. The human brain is remarkable and unique, but it does not develop in a vacuum. Adaptations in the placenta and the way it produces sex steroid hormones may have been crucial for our brain’s evolution, and for the emergence of the cognitive and social traits that make us human.”
The placenta and the hormones it produces may have played a crucial role in the evolution of the human brain, while also leading to the behavioural traits that have made human societies able to thrive and expand, according to a new hypothesis proposed by researchers from the Universities of Cambridge and Oxford.
Representatives from Cambridge University Press & Assessment, Cambridge Zero, Cambridge Institute for Sustainability Leadership and Cambridge Judge Business School convened the session and were joined by a range of experts working on climate change-related research and education. Every speaker from across higher education highlighted the importance of identifying misinformation and disinformation in tackling climate action. Read more about the workshop.
University of Cambridge experts highlighted the key role of education in combatting climate misinformation at a Global Sustainable Development Congress (GSDC) workshop in Turkey.
Social security numbers stolen. Public transport halted. Hospital systems frozen until ransoms are paid. These are some of the damaging consequences of insecure memory in computer systems.
Over the past decade, public awareness of such cyberattacks has intensified, as their impacts have harmed individuals, corporations, and governments. Today, this awareness is coinciding with technologies that are finally mature enough to eliminate vulnerabilities in memory safety.
In an op-ed earlier this year in Communications of the ACM, Hamed Okhravi, a cybersecurity researcher at MIT Lincoln Laboratory, joined 20 other luminaries in the field of computer security to lay out a plan for achieving universal memory safety. They argue for a standardized framework as an essential next step to adopting memory-safety technologies throughout all forms of computer systems, from fighter jets to cell phones.
Memory-safety vulnerabilities occur when a program performs unintended or erroneous operations in memory. Such operations are prevalent, accounting for an estimated 70 percent of software vulnerabilities. If attackers gain access to memory, they can potentially steal sensitive information, alter program execution, or even take control of the computer system.
These vulnerabilities exist largely because common software programming languages, such as C or C++, are inherently memory-insecure. A simple error by a software engineer, perhaps one line in a system’s multimillion lines of code, could be enough for an attacker to exploit. In recent years, new memory-safe languages, such as Rust, have been developed. But rewriting legacy systems in new, memory-safe languages can be costly and complicated.
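To make the contrast concrete, here is a small sketch of our own (not code from any system discussed in the article): in C, reading element 3 of a 3-element array silently touches adjacent memory, the classic out-of-bounds bug behind many exploits, whereas Rust's slice API turns the same mistake into an explicit, recoverable error.

```rust
/// Bounds-checked read: `get` returns `None` for an out-of-range index
/// instead of reading out-of-bounds memory, as `buf[i]` in C would.
fn read_byte(buf: &[u8], i: usize) -> Option<u8> {
    buf.get(i).copied()
}

fn main() {
    let packet = [0x10u8, 0x20, 0x30];
    // In-bounds access works as expected.
    assert_eq!(read_byte(&packet, 1), Some(0x20));
    // The off-by-one access that C would permit is caught here:
    assert_eq!(read_byte(&packet, 3), None);
    println!("bounds-checked reads ok");
}
```

Even plain indexing (`packet[3]`) in Rust would panic with a defined error rather than corrupt memory, which is the guarantee that makes the language memory-safe by default.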
Okhravi focuses on the national security implications of memory-safety vulnerabilities. For the U.S. Department of Defense (DoD), whose systems comprise billions of lines of legacy C or C++ code, memory safety has long been a known problem. The National Security Agency (NSA) and the federal government have recently urged technology developers to eliminate memory-safety vulnerabilities from their products. Security concerns extend beyond military systems to widespread consumer products.
"Cell phones, for example, are not immediately important for defense or war-fighting, but if we have 200 million vulnerable cell phones in the nation, that’s a serious matter of national security," Okhravi says.
Memory-safe technology
In recent years, several technologies have emerged to help patch memory vulnerabilities in legacy systems. As the guest editor for a special issue of IEEE Security and Privacy, Okhravi solicited articles from top contributors in the field to highlight these technologies and the ways they can build on one another.
Some of these memory-safety technologies have been developed at Lincoln Laboratory, with sponsorship from DoD agencies. These technologies include TRACER and TASR, which are software products for Windows and Linux systems, respectively, that reshuffle the location of code in memory each time a program accesses it, making it very difficult for attackers to find exploits. These moving-target solutions have since been licensed by cybersecurity and cloud services companies.
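The intuition behind such moving-target defenses can be sketched with a toy model (ours, with hypothetical names; TRACER and TASR operate on real process address spaces, not this simplification): an exploit that hardcodes an absolute code address succeeds only while the current randomized layout happens to match, so each reshuffle invalidates what the attacker has learned.

```rust
/// Absolute address of a code "gadget" under the current randomized base.
fn gadget_address(base: u64, offset: u64) -> u64 {
    base + offset
}

/// An exploit that hardcodes an absolute address works only if the
/// current layout still places the gadget there.
fn exploit_succeeds(hardcoded: u64, base: u64, offset: u64) -> bool {
    hardcoded == gadget_address(base, offset)
}

fn main() {
    let offset = 0x1234;
    let old_base = 0x7f00_0000_0000u64;
    // The attacker leaks the gadget address under the old layout...
    let leaked = gadget_address(old_base, offset);
    assert!(exploit_succeeds(leaked, old_base, offset));
    // ...but after code is reshuffled to a new base, the stale address misses.
    let new_base = 0x7f3a_0000_0000u64;
    assert!(!exploit_succeeds(leaked, new_base, offset));
    println!("stale exploit address defeated by re-randomization");
}
```

The real systems re-randomize continuously as programs run, so a leaked address goes stale before it can be reliably exploited.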
"These technologies are quick wins, enabling us to make a lot of immediate impact without having to rebuild the whole system. But they are only a partial solution, a way of securing legacy systems while we are transitioning to safer languages," Okhravi says.
Innovative work is underway to make that transition easier. For example, the TRACTOR program at the U.S. Defense Advanced Research Projects Agency is developing artificial intelligence tools to automatically translate legacy C code to Rust. Lincoln Laboratory researchers will test and evaluate the translator for use in DoD systems.
Okhravi and his coauthors acknowledged in their op-ed that the timeline for full adoption of memory-safe systems is long — likely decades. It will require the deployment of a combination of new hardware, software, and techniques, each with their own adoption paths, costs, and disruptions. Organizations should prioritize mission-critical systems first.
"For example, the most important components in a fighter jet, such as the flight-control algorithm or the munition-handling logic, would be made memory-safe, say, within five years," Okhravi says. Subsystems less important to critical functions would have a longer time frame.
Use of memory-safe programming languages at Lincoln Laboratory
As Lincoln Laboratory continues its leadership in advancing memory-safety technologies, the Secure Resilient Systems and Technology Group has prioritized adopting memory-safe programming languages. "We’ve been investing in the group-wide use of Rust for the past six years as part of our broader strategy to prototype cyber-hardened mission systems and high-assurance cryptographic implementations for the DoD and intelligence community," says Roger Khazan, who leads the group. "Memory safety is fundamental to trustworthiness in these systems."
Rust’s strong guarantees around memory safety, along with its speed and ability to catch bugs early during development, make it especially well-suited for building secure and reliable systems. The laboratory has been using Rust to prototype and transition secure components for embedded, distributed, and cryptographic systems where resilience, performance, and correctness are mission-critical.
These efforts support both immediate U.S. government needs and a longer-term transformation of the national security software ecosystem. "They reflect Lincoln Laboratory’s broader mission of advancing technology in service to national security, grounded in technical excellence, innovation, and trust," Khazan adds.
A technology-agnostic framework
As new computer systems are designed, developers need a framework of memory-safety standards guiding them. Today, attempts to require memory safety in new systems are hampered by the lack of a clear set of definitions and practices.
Okhravi emphasizes that this standardized framework should be technology-agnostic and provide specific timelines with sets of requirements for different types of systems.
"In the acquisition process for the DoD, and even the commercial sector, when we are mandating memory safety, it shouldn’t be tied to a specific technology. It should be generic enough that different types of systems can apply different technologies to get there," he says.
Filling this gap not only requires building industrial consensus on technical approaches, but also collaborating with government and academia to bring this effort to fruition.
The need for collaboration was an impetus for the op-ed, and Okhravi says that the consortium of experts will push for standardization from their positions across industry, government, and academia. Contributors to the paper represent a wide range of institutes, from the University of Cambridge and SRI International to Microsoft and Google. Together, they are building momentum to finally root out memory vulnerabilities and the costly damages associated with them.
“We are seeing this cost-risk trade-off mindset shifting, partly because of the maturation of technology and partly because of such consequential incidents,” Okhravi says. “We hear all the time that such-and-such breach cost billions of dollars. Meanwhile, making the system secure might have cost 10 million dollars. Wouldn’t we have been better off making that effort?”
Memory-safety vulnerabilities are pervasive across computer systems. New technologies and unified efforts across government and industry can help change that.
The MIT Press announces the acquisition of textbook publisher University Science Books from AIP Publishing, a subsidiary of the American Institute of Physics (AIP).
University Science Books was founded in 1978 to publish intermediate- and advanced-level science and reference books by respected authors, published with the highest design and production standards, and priced as affordably as possible. Over the years, USB’s authors have acquired international followings, and its textbooks in chemistry, physics, and astronomy have been recognized as the gold standard in their respective disciplines. USB was acquired by AIP Publishing in 2021.
Bestsellers include John Taylor’s “Classical Mechanics,” the No. 1 adopted text for undergrad mechanics courses in the United States and Canada, and his “Introduction to Error Analysis;” and Don McQuarrie’s “Physical Chemistry: A Molecular Approach” (commonly known as “Big Red”), the second-most adopted physical chemistry textbook in the U.S.
“We are so pleased to have found a new home for USB’s prestigious list of textbooks in the sciences,” says Alix Vance, CEO of AIP Publishing. “With its strong STEM focus, academic rigor, and high production standards, the MIT Press is the perfect partner to continue the publishing legacy of University Science Books.”
“This acquisition is both a brand and content fit for the MIT Press,” says Amy Brand, director and publisher of the MIT Press. “USB’s respected science list will complement our long-established history of publishing foundational texts in computer science, finance, and economics.”
The MIT Press will take over the USB list as of July 1, with inventory transferring to Penguin Random House Publishing Services, the MIT Press’ sales and distribution partner.
For details regarding University Science Books titles, inventory, and how to order, please contact the MIT Press.
Established in 1962, The MIT Press is one of the largest and most distinguished university presses in the world and a leading publisher of books and journals at the intersection of science, technology, art, social science, and design.
AIP Publishing is a wholly owned not-for-profit subsidiary of AIP and supports the charitable, scientific, and educational purposes of AIP through scholarly publishing activities on its behalf and on behalf of its publishing partners.
Today, all non-Africans are known to have descended from a small group of people that ventured into Eurasia around 50,000 years ago. However, fossil evidence shows that there were numerous failed dispersals before this time that left no detectable traces in living people.
In a new study published today in the journal Nature, scientists say that from around 70,000 years ago, early humans began to exploit different habitat types in Africa in ways not seen before.
At this time, our ancestors started to live in the equatorial forests of West and Central Africa, and in the Sahara and Sahel desert regions of North Africa, where they encountered a range of new environmental conditions.
As they adapted to life in these diverse habitats, early humans gained the flexibility to tackle the range of novel environmental conditions they would encounter during their expansion out of Africa.
This increase in the human niche may have been the result of social adaptations, such as long-distance social networks, which allowed for an increase in cultural exchange. The process would have been self-reinforcing: as people started to inhabit a wider proportion of the African continent, regions previously disconnected would have come into contact, leading to further exchanges and possibly even greater flexibility. The final outcome was that our species became the ultimate generalist, able to tackle a wider range of environments.
Andrea Manica, Professor of Evolutionary Ecology in the University of Cambridge’s Department of Zoology, who co-led the study with Professor Eleanor Scerri from the Max Planck Institute of Bioanthropology in Germany, said: “Around 70,000-50,000 years ago, the easiest route out of Africa would have been more challenging than during previous periods, and yet this expansion was big - and ultimately successful.”
Manica added: “It’s incredibly exciting that we were able to look back in time and pinpoint the changes that enabled our ancestors to successfully migrate out of Africa.”
Dr Emily Hallett of Loyola University Chicago, co-lead author of the study, said: “We assembled a dataset of archaeological sites and environmental information covering the last 120,000 years in Africa. We used methods developed in ecology to understand changes in human environmental niches - the habitats humans can use and thrive in - during this time.”
Dr Michela Leonardi at the University of Cambridge and London’s Natural History Museum, the study’s other lead author, said: “Our results showed that the human niche began to expand significantly from 70,000 years ago, and that this expansion was driven by humans increasing their use of diverse habitat types, from forests to arid deserts.”
Many explanations have previously been offered for the uniquely successful dispersal out of Africa, from technological innovations to immunities granted by interbreeding with Eurasian hominins. But there is no evidence of such technological innovation, and previous interbreeding does not appear to have helped the long-term success of earlier attempts to spread out of Africa.
“Unlike previous humans dispersing out of Africa, those human groups moving into Eurasia after around 60-50,000 years ago were equipped with a distinctive ecological flexibility as a result of coping with climatically challenging habitats,” said Scerri. “This likely provided a key mechanism for the adaptive success of our species beyond their African homeland.”
Previous human dispersals out of Africa - which were not successful in the long term - seem to have happened during particularly favourable windows of increased rainfall in the Saharo-Arabian desert belt, which created ‘green corridors’ for people to move into Eurasia.
The environmental flexibility developed in Africa from around 70,000 years ago ultimately resulted in modern humans’ unique ability to adapt and thrive in diverse environments, and to cope with varying environmental conditions throughout life.
This research was supported by funding from the Max Planck Society, European Research Council and Leverhulme Trust.
Adapted from a press release by the Max Planck Institute of Geoanthropology, Germany
Cornell Tech Assistant Professor Raaz Dwivedi has co-founded Traversal, a startup emerging from stealth this week with a mission: to transform how modern software systems detect and resolve outages using artificial intelligence.
Launched in February of this year, the MIT Generative AI Impact Consortium (MGAIC), a presidential initiative led by MIT’s Office of Innovation and Strategy and administered by the MIT Stephen A. Schwarzman College of Computing, issued a call for proposals, inviting researchers from across MIT to submit ideas for innovative projects studying high-impact uses of generative AI models.
The call received 180 submissions from nearly 250 faculty members, spanning all of MIT’s five schools and the college. The overwhelming response across the Institute exemplifies the growing interest in AI and follows in the wake of MIT’s Generative AI Week and call for impact papers. Fifty-five proposals were selected for MGAIC’s inaugural seed grants, with several more selected to be funded by the consortium’s founding company members.
Over 30 funding recipients presented their proposals to the greater MIT community at a kickoff event on May 13. Anantha P. Chandrakasan, chief innovation and strategy officer and dean of the School of Engineering who is head of the consortium, welcomed the attendees and thanked the consortium’s founding industry members.
“The amazing response to our call for proposals is an incredible testament to the energy and creativity that MGAIC has sparked at MIT. We are especially grateful to our founding members, whose support and vision helped bring this endeavor to life,” adds Chandrakasan. “One of the things that has been most remarkable about MGAIC is that this is a truly cross-Institute initiative. Deans from all five schools and the college collaborated in shaping and implementing it.”
Vivek F. Farias, the Patrick J. McGovern (1959) Professor at the MIT Sloan School of Management and co-faculty director of the consortium with Tim Kraska, associate professor of electrical engineering and computer science in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), emceed the afternoon of five-minute lightning presentations.
Presentation highlights include:
“AI-Driven Tutors and Open Datasets for Early Literacy Education,” presented by Ola Ozernov-Palchik, a research scientist at the McGovern Institute for Brain Research, proposed refining AI tutors for pK-7 students with the aim of reducing literacy disparities.
“Developing jam_bots: Real-Time Collaborative Agents for Live Human-AI Musical Improvisation,” presented by Anna Huang, assistant professor of music and assistant professor of electrical engineering and computer science, and Joe Paradiso, the Alexander W. Dreyfoos (1954) Professor in Media Arts and Sciences at the MIT Media Lab, aims to enhance human-AI musical collaboration in real-time for live concert improvisation.
“GENIUS: GENerative Intelligence for Urban Sustainability,” presented by Norhan Bayomi, a postdoc at the MIT Environmental Solutions Initiative and a research assistant in the Urban Metabolism Group, aims to address the lack of a standardized approach for evaluating and benchmarking cities’ climate policies.
Georgia Perakis, the John C Head III Dean (Interim) of the MIT Sloan School of Management and professor of operations management, operations research, and statistics, who serves as co-chair of the GenAI Dean’s oversight group with Dan Huttenlocher, dean of the MIT Schwarzman College of Computing, ended the event with closing remarks that emphasized “the readiness and eagerness of our community to lead in this space.”
“This is only the beginning,” she continued. “We are at the front edge of a historic moment — one where MIT has the opportunity, and the responsibility, to shape the future of generative AI with purpose, with excellence, and with care.”
Anantha P. Chandrakasan, chief innovation and strategy officer and dean of the School of Engineering who is head of the MIT Generative AI Impact Consortium (MGAIC), kicks off an afternoon of presentations.
The Princeton Plasma Physics Laboratory will partner with venture capital firm SOSV and the New Jersey Economic Development Authority to advance plasma technologies with an eye toward commercial impact.
Volcanic islands, such as the islands of Hawaii and the Caribbean, are surrounded by coral reefs that encircle an island in a labyrinthine, living ring. A coral reef is punctured at points by reef passes — wide channels that cut through the coral and serve as conduits for ocean water and nutrients to filter in and out. These watery passageways provide circulation throughout a reef, helping to maintain the health of corals by flushing out freshwater and transporting key nutrients.
Now, MIT scientists have found that reef passes are shaped by island rivers. In a study appearing today in the journal Geophysical Research Letters, the team shows that the locations of reef passes along coral reefs line up with where rivers funnel out from an island’s coast.
Their findings provide the first quantitative evidence of rivers forming reef passes. Scientists and explorers had speculated that this may be the case: Where a river on a volcanic island meets the coast, the freshwater and sediment it carries flows toward the reef, where a strong enough flow can tunnel into the surrounding coral. This idea has been proposed from time to time but never quantitatively tested, until now.
“The results of this study help us to understand how the health of coral reefs depends on the islands they surround,” says study author Taylor Perron, the Cecil and Ida Green Professor of Earth, Atmospheric and Planetary Sciences at MIT.
“A lot of discussion around rivers and their impact on reefs today has been negative because of human impact and the effects of agricultural practices,” adds lead author Megan Gillen, a graduate student in the MIT-WHOI Joint Program in Oceanography. “This study shows the potential long-term benefits rivers can have on reefs, which I hope reshapes the paradigm and highlights the natural state of rivers interacting with reefs.”
The study’s other co-author is Andrew Ashton of the Woods Hole Oceanographic Institution.
Drawing the lines
The new study is based on the team’s analysis of the Society Islands, a chain of islands in the South Pacific Ocean that includes Tahiti and Bora Bora. Gillen, who joined the MIT-WHOI program in 2020, was interested in exploring connections between coral reefs and the islands they surround. With limited options for on-site work during the Covid-19 pandemic, she and Perron looked to see what they could learn through satellite images and maps of island topography. They did a quick search using Google Earth and zeroed in on the Society Islands for their uniquely visible reef and island features.
“The islands in this chain have these iconic, beautiful reefs, and we kept noticing these reef passes that seemed to align with deeply embayed portions of the coastline,” Gillen says. “We started asking ourselves, is there a correlation here?”
Viewed from above, the coral reefs that circle some islands bear what look to be notches, like cracks that run straight through a ring. These breaks in the coral are reef passes — large channels that run tens of meters deep and can be wide enough for some boats to pass through. On first look, Gillen noticed that the most obvious reef passes seemed to line up with flooded river valleys — depressions in the coastline that have been eroded over time by island rivers that flow toward the ocean. She wondered whether and to what extent island rivers might shape reef passes.
“People have examined the flow through reef passes to understand how ocean waves and seawater circulate in and out of lagoons, but there have been no claims of how these passes are formed,” Gillen says. “Reef pass formation has been mentioned infrequently in the literature, and people haven’t explored it in depth.”
Reefs unraveled
To get a detailed view of the topography in and around the Society Islands, the team used data from the NASA Shuttle Radar Topography Mission — two radar antennae that flew aboard the space shuttle in 2000 and measured the topography across 80 percent of the Earth’s surface.
The researchers used the mission’s topographic data in the Society Islands to create a map of every drainage basin along the coast of each island, to get an idea of where major rivers flow or once flowed. They also marked the locations of every reef pass in the surrounding coral reefs. They then essentially “unraveled” each island’s coastline and reef into a straight line, and compared the locations of basins versus reef passes.
“Looking at the unwrapped shorelines, we find a significant correlation in the spatial relationship between these big river basins and where the passes line up,” Gillen says. “So we can say that statistically, the alignment of reef passes and large rivers does not seem random. The big rivers have a role in forming passes.”
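The kind of test described above can be sketched in a short script: treat reef passes and river-basin outlets as positions along an unwrapped (circular) shoreline, then ask how often randomly placed passes would align with basins as closely as the observed ones. The positions below are hypothetical, and the script illustrates only the statistical idea of a permutation test, not the authors’ actual analysis.

```python
import numpy as np

def nearest_distance(passes, basins, shoreline_length):
    """Mean distance from each reef pass to its nearest river-basin
    outlet, measured along a circular (unwrapped) shoreline."""
    d = np.abs(passes[:, None] - basins[None, :])
    d = np.minimum(d, shoreline_length - d)  # wrap around the island
    return d.min(axis=1).mean()

def permutation_test(passes, basins, shoreline_length, n_iter=10000, seed=0):
    """P-value for the null hypothesis that pass positions are random."""
    rng = np.random.default_rng(seed)
    observed = nearest_distance(passes, basins, shoreline_length)
    random_stats = np.array([
        nearest_distance(rng.uniform(0, shoreline_length, len(passes)),
                         basins, shoreline_length)
        for _ in range(n_iter)
    ])
    # Fraction of random placements at least as well aligned as observed
    return observed, (random_stats <= observed).mean()

# Hypothetical positions, in km along an unwrapped 60 km shoreline
passes = np.array([2.1, 14.8, 30.5, 47.0])   # reef-pass locations
basins = np.array([2.0, 15.0, 31.0, 46.5, 55.0])  # river-basin outlets
obs, p = permutation_test(passes, basins, 60.0)
```

A small p-value here would indicate, as in the study, that the alignment of passes with river basins is unlikely to be random.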
As for how rivers shape the coral conduits, the team has two ideas, which they call, respectively, reef incision and reef encroachment. In reef incision, they propose that reef passes can form in times when the sea level is relatively low, such that the reef is exposed above the sea surface and a river can flow directly over the reef. The water and sediment carried by the river can then erode the coral, progressively carving a path through the reef.
When sea level is relatively higher, the team suspects a reef pass can still form, through reef encroachment. Coral reefs naturally live close to the water surface, where there is light and opportunity for photosynthesis. When sea levels rise, corals naturally grow upward and inward toward an island, to try to “catch up” to the water line.
“Reefs migrate toward the islands as sea levels rise, trying to keep pace with changing average sea level,” Gillen says.
However, part of the encroaching reef can end up in old river channels that were previously carved out by large rivers and that are lower than the rest of the island coastline. The corals in these river beds end up deeper than light can extend into the water column, and inevitably drown, leaving a gap in the form of a reef pass.
“We don’t think it’s an either/or situation,” Gillen says. “Reef incision occurs when sea levels fall, and reef encroachment happens when sea levels rise. Both mechanisms, occurring over dozens of cycles of sea-level rise and island evolution, are likely responsible for the formation and maintenance of reef passes over time.”
The team also looked to see whether there were differences in reef passes in older versus younger islands. They observed that younger islands were surrounded by more reef passes that were spaced closer together, versus older islands that had fewer reef passes that were farther apart.
As islands age, they subside, or sink, into the ocean, which reduces the amount of land that funnels rainwater into rivers. Eventually, rivers are too weak to keep the reef passes open, at which point, the ocean likely takes over, and incoming waves could act to close up some passes.
Gillen is exploring ideas for how rivers, or river-like flow, can be engineered to create paths through coral reefs in ways that would promote circulation and benefit reef health.
“Part of me wonders: If you had a more persistent flow, in places where you don’t naturally have rivers interacting with the reef, could that potentially be a way to increase health, by incorporating that river component back into the reef system?” Gillen says. “That’s something we’re thinking about.”
This research was supported, in part, by the WHOI Watson and Von Damm fellowships.
Pictured is a shallow reef flat channel on the atoll of Tetiaroa, located north of Tahiti in the Society Islands. MIT researchers have found evidence that island rivers may carve out paths in surrounding reefs over time, helping to maintain their health over millions of years.
Water makes up around 60 percent of the human body. More than half of this water sloshes around inside the cells that make up organs and tissues. Much of the remaining water flows in the nooks and crannies between cells, much like seawater between grains of sand.
Now, MIT engineers have found that this “intercellular” fluid plays a major role in how tissues respond when squeezed, pressed, or physically deformed. Their findings could help scientists understand how cells, tissues, and organs physically adapt to conditions such as aging, cancer, diabetes, and certain neuromuscular diseases.
In a paper appearing today in Nature Physics, the researchers show that when a tissue is pressed or squeezed, it is more compliant and relaxes more quickly when the fluid between its cells flows easily. When the cells are packed together and there is less room for intercellular flow, the tissue as a whole is stiffer and resists being pressed or squeezed.
The findings challenge conventional wisdom, which has assumed that a tissue’s compliance depends mainly on what’s inside, rather than around, a cell. Now that the researchers have shown that intercellular flow determines how tissues will adapt to physical forces, the results can be applied to understand a wide range of physiological conditions, including how muscles withstand exercise and recover from injury, and how a tissue’s physical adaptability may affect the progression of aging, cancer, and other medical conditions.
The team envisions the results could also inform the design of artificial tissues and organs. For instance, in engineering artificial tissue, scientists might optimize intercellular flow within the tissue to improve its function or resilience. The researchers suspect that intercellular flow could also be a route for delivering nutrients or therapies, either to heal a tissue or eradicate a tumor.
“People know there is a lot of fluid between cells in tissues, but how important that is, in particular in tissue deformation, is completely ignored,” says Ming Guo, associate professor of mechanical engineering at MIT. “Now we really show we can observe this flow. And as the tissue deforms, flow between cells dominates the behavior. So, let’s pay attention to this when we study diseases and engineer tissues.”
Guo is a co-author of the new study, which includes lead author and MIT postdoc Fan Liu PhD ’24, along with Bo Gao and Hui Li of Beijing Normal University, and Liran Lei and Shuainan Liu of Peking Union Medical College.
Pressed and squeezed
The tissues and organs in our body are constantly undergoing physical deformations, from the large stretch and strain of muscles during motion to the small and steady contractions of the heart. In some cases, how easily tissues adapt to deformation can relate to how quickly a person can recover from, for instance, an allergic reaction, a sports injury, or a brain stroke. However, exactly what sets a tissue’s response to deformation is largely unknown.
Guo and his group at MIT looked into the mechanics of tissue deformation, and the role of intercellular flow in particular, following a study they published in 2020. In that study, they focused on tumors and observed the way in which fluid can flow from the center of a tumor out to its edges, through the cracks and crevices between individual tumor cells. They found that when a tumor was squeezed or pressed, the intercellular flow increased, acting as a conveyor belt to transport fluid from the center to the edges. Intercellular flow, they found, could fuel tumor invasion into surrounding regions.
In their new study, the team looked to see what role this intercellular flow might play in other, noncancerous tissues.
“Whether you allow the fluid to flow between cells or not seems to have a major impact,” Guo says. “So we decided to look beyond tumors to see how this flow influences how other tissues respond to deformation.”
A fluid pancake
Guo, Liu, and their colleagues studied the intercellular flow in a variety of biological tissues, including cells derived from pancreatic tissue. They carried out experiments in which they first cultured small clusters of tissue, each measuring less than a quarter of a millimeter wide and numbering tens of thousands of individual cells. They placed each tissue cluster in a custom-designed testing platform that the team built specifically for the study.
“These microtissue samples are in this sweet zone where they are too large to see with atomic force microscopy techniques and too small for bulkier devices,” Guo says. “So, we decided to build a device.”
The researchers adapted a high-precision microbalance that measures minute changes in weight. They combined this with a step motor that is designed to press down on a sample with nanometer precision. The team placed tissue clusters one at a time on the balance and recorded each cluster’s changing weight as it relaxed from a sphere into the shape of a pancake in response to the compression. The team also took videos of the clusters as they were squeezed.
For each type of tissue, the team made clusters of varying sizes. They reasoned that if the tissue’s response is ruled by the flow between cells, then the bigger a tissue, the longer it should take for water to seep through, and therefore the longer it should take the tissue to relax. If, instead, the response were determined by the structure of the tissue rather than by fluid, relaxation should take the same amount of time regardless of size.
Over multiple experiments with a variety of tissue types and sizes, the team observed a similar trend: The bigger the cluster, the longer it took to relax, indicating that intercellular flow dominates a tissue’s response to deformation.
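This size-scaling argument can be illustrated numerically. For a diffusion-like, poroelastic response, the relaxation time should grow roughly as the square of the cluster size (t ~ L²/D), whereas a purely structure-dominated response would be size-independent. The measurements below are invented for illustration; the sketch only shows how a power-law exponent would be extracted from such data.

```python
import numpy as np

# Hypothetical relaxation times for tissue clusters of different radii.
# Flow-dominated (poroelastic) relaxation predicts t ~ L^2;
# a structure-dominated response predicts no size dependence.
radii = np.array([50.0, 100.0, 150.0, 200.0])   # cluster radius, micrometers
times = np.array([12.0, 49.0, 110.0, 195.0])    # relaxation time, seconds

# Fit t = c * L^alpha by linear regression in log-log space
alpha, log_c = np.polyfit(np.log(radii), np.log(times), 1)
# An exponent alpha near 2 supports flow-dominated relaxation
```

With the hypothetical numbers above, the fitted exponent comes out close to 2, the signature of intercellular flow setting the relaxation time.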
“We show that this intercellular flow is a crucial component to be considered in the fundamental understanding of tissue mechanics and also applications in engineering living systems,” Liu says.
Going forward, the team plans to look into how intercellular flow influences brain function, particularly in disorders such as Alzheimer’s disease.
“Intercellular or interstitial flow can help you remove waste and deliver nutrients to the brain,” Liu adds. “Enhancing this flow in some cases might be a good thing.”
“As this work shows, as we apply pressure to a tissue, fluid will flow,” Guo says. “In the future, we can think of designing ways to massage a tissue to allow fluid to transport nutrients between cells.”
These images use color markers — blue for nuclei, red for cell membranes, and green for fluid — to show that spaces between cells shrink as fluid moves out during tissue compression, from left to right and top to bottom.
Researchers are developing a living material that actively extracts carbon dioxide from the atmosphere. Photosynthetic cyanobacteria grow inside it, forming biomass and solid minerals and thus binding CO2 in two different manners.
More than half of the nation’s 623,218 bridges are experiencing significant deterioration. Through an in-field case study conducted in western Massachusetts, a team led by the University of Massachusetts at Amherst in collaboration with researchers from the MIT Department of Mechanical Engineering (MechE) has just successfully demonstrated that 3D printing may provide a cost-effective, minimally disruptive solution.
“Anytime you drive, you go under or over a corroded bridge,” says Simos Gerasimidis, associate professor of civil and environmental engineering at UMass Amherst and former visiting professor in the Department of Civil and Environmental Engineering at MIT, in a press release. “They are everywhere. It’s impossible to avoid, and their condition often shows significant deterioration. We know the numbers.”
The numbers, according to the American Society of Civil Engineers’ 2025 Report Card for America’s Infrastructure, are staggering: Across the United States, 49.1 percent of the nation’s 623,218 bridges are in “fair” condition and 6.8 percent are in “poor” condition. The projected cost to restore all of these failing bridges exceeds $191 billion.
A proof-of-concept repair took place last month on a small, corroded section of a bridge in Great Barrington, Massachusetts. The technique, called cold spray, can extend the life of beams, reinforcing them with newly deposited steel. The process accelerates particles of powdered steel in heated, compressed gas, and then a technician uses an applicator to spray the steel onto the beam. Repeated sprays create multiple layers, restoring thickness and other structural properties.
This method has proven to be an effective solution for other large structures like submarines, airplanes, and ships, but bridges present a problem on a greater scale. Unlike movable vessels, stationary bridges cannot be brought to the 3D printer — the printer must be brought on-site — and, to lessen systemic impacts, repairs must also be made with minimal disruptions to traffic, which the new approach allows.
“Now that we’ve completed this proof-of-concept repair, we see a clear path to a solution that is much faster, less costly, easier, and less invasive,” says Gerasimidis. “To our knowledge, this is a first. Of course, there is some R&D that needs to be developed, but this is a huge milestone to that.”
“This is a tremendous collaboration where cutting-edge technology is brought to address a critical need for infrastructure in the commonwealth and across the United States,” says John Hart, Class of 1922 Professor and head of the Department of MechE at MIT. Hart and Haden Quinlan, senior program manager in the Center for Advanced Production Technologies at MIT, are leading MIT’s efforts in the project. Hart is also faculty co-lead of the recently announced MIT Initiative for New Manufacturing.
“Integrating digital systems with advanced physical processing is the future of infrastructure,” says Quinlan. “We’re excited to have moved this technology beyond the lab and into the field, and grateful to our collaborators in making this work possible.”
UMass says the Massachusetts Department of Transportation (MassDOT) has been a valued research partner, helping to identify the problem and providing essential support for the development and demonstration of the technology. Technical guidance and funding support were provided by the MassDOT Highway Division and the Research and Technology Transfer Program.
Equipment for this project was supported through the Massachusetts Manufacturing Innovation Initiative, a statewide program led by the Massachusetts Technology Collaborative (MassTech)’s Center for Advanced Manufacturing that helps bridge the gap between innovation and commercialization in hard tech manufacturing.
“It’s a very Massachusetts success story,” Gerasimidis says. “It involves MassDOT being open-minded to new ideas. It involves UMass and MIT putting [together] the brains to do it. It involves MassTech to bring manufacturing back to Massachusetts. So, I think it’s a win-win for everyone involved here.”
The bridge in Great Barrington is scheduled for demolition in a few years. After demolition, the recently sprayed beams will be taken back to UMass for testing and measurement, to study how well the deposited steel adhered to the structure in the field compared with a controlled lab setting, whether it corroded further after it was sprayed, and what its mechanical properties are.
This demonstration builds on several years of research by the UMass and MIT teams, including development of a “digital thread” approach to scan corroded beam surfaces and determine material deposition profiles, alongside laboratory studies of cold spray and other additive manufacturing approaches that are suited to field deployment.
Altogether, this work is a collaborative effort among UMass Amherst, MIT MechE, MassDOT, the Massachusetts Technology Collaborative (MassTech), the U.S. Department of Transportation, and the Federal Highway Administration. Research reports are available on the MassDOT website.
Members of the UMass Amherst and MIT research team pose next to the 3D-printed patch. Haden Quinlan (front, kneeling), senior program manager in the Center for Advanced Production Technologies at MIT, is one of the researchers leading MIT’s efforts on the project.
When the Earth froze over, where did life shelter? MIT scientists say one refuge may have been pools of melted ice that dotted the planet’s icy surface.
In a study appearing today in Nature Communications, the researchers report that 635 million to 720 million years ago, during periods known as “Snowball Earth,” when much of the planet was covered in ice, some of our ancient cellular ancestors could have waited things out in meltwater ponds.
The scientists found that eukaryotes — complex cellular lifeforms that eventually evolved into the diverse multicellular life we see today — could have survived the global freeze by living in shallow pools of water. These small, watery oases may have persisted atop relatively shallow ice sheets present in equatorial regions. There, the ice surface could accumulate dark-colored dust and debris from below, which enhanced its ability to melt into pools. At temperatures hovering around 0 degrees Celsius, the resulting meltwater ponds could have served as habitable environments for certain forms of early complex life.
The team drew its conclusions based on an analysis of modern-day meltwater ponds. Today in Antarctica, small pools of melted ice can be found along the margins of ice sheets. The conditions along these polar ice sheets are similar to what likely existed along ice sheets near the equator during Snowball Earth.
The researchers analyzed samples from a variety of meltwater ponds located on the McMurdo Ice Shelf in an area that was first described by members of Robert Falcon Scott's 1903 expedition as “dirty ice.” The MIT researchers discovered clear signatures of eukaryotic life in every pond. The communities of eukaryotes varied from pond to pond, revealing a surprising diversity of life across the setting. The team also found that salinity plays a key role in the kind of life a pond can host: Ponds that were more brackish or salty had more similar eukaryotic communities, which differed from those in ponds with fresher waters.
“We’ve shown that meltwater ponds are valid candidates for where early eukaryotes could have sheltered during these planet-wide glaciation events,” says lead author Fatima Husain, a graduate student in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “This shows us that diversity is present and possible in these sorts of settings. It’s really a story of life’s resilience.”
The study’s MIT co-authors include Schlumberger Professor of Geobiology Roger Summons and former postdoc Thomas Evans, along with Jasmin Millar of Cardiff University, Anne Jungblut at the Natural History Museum in London, and Ian Hawes of the University of Waikato in New Zealand.
Polar plunge
“Snowball Earth” is the colloquial term for periods in Earth’s history during which the planet iced over. It most often refers to the two consecutive, multimillion-year glaciation events of the Cryogenian Period, which geologists date to between 720 and 635 million years ago. Whether the Earth was more of a hardened snowball or a softer “slushball” is still up for debate. But scientists are certain of one thing: Most of the planet was plunged into a deep freeze, with average global temperatures of minus 50 degrees Celsius. The question has been: How and where did life survive?
“We’re interested in understanding the foundations of complex life on Earth. We see evidence for eukaryotes before and after the Cryogenian in the fossil record, but we largely lack direct evidence of where they may have lived during,” Husain says. “The great part of this mystery is, we know life survived. We’re just trying to understand how and where.”
There are a number of ideas for where organisms could have sheltered during Snowball Earth, including in certain patches of the open ocean (if such environments existed), in and around deep-sea hydrothermal vents, and under ice sheets. In considering meltwater ponds, Husain and her colleagues pursued the hypothesis that surface ice meltwaters may also have been capable of supporting early eukaryotic life at the time.
“There are many hypotheses for where life could have survived and sheltered during the Cryogenian, but we don’t have excellent analogs for all of them,” Husain notes. “Above-ice meltwater ponds occur on Earth today and are accessible, giving us the opportunity to really focus in on the eukaryotes which live in these environments.”
Small pond, big life
For their new study, the researchers analyzed samples taken from meltwater ponds in Antarctica. In 2018, Summons and colleagues from New Zealand traveled to a region of the McMurdo Ice Shelf in East Antarctica known to host small ponds of melted ice, each just a few feet deep and a few meters wide. There, water freezes all the way to the seafloor, trapping dark-colored sediments and marine organisms in the process. Wind-driven loss of ice from the surface creates a sort of conveyor belt that brings this trapped debris to the top over time. The debris absorbs the sun’s warmth and melts the surrounding ice, while debris-free ice reflects incoming sunlight, resulting in the formation of shallow meltwater ponds.
The bottom of each pond is lined with mats of microbes that have built up over years to form layers of sticky cellular communities.
“These mats can be a few centimeters thick, colorful, and they can be very clearly layered,” Husain says.
These microbial mats are made up of cyanobacteria: prokaryotic, single-celled photosynthetic organisms that lack a cell nucleus or other organelles. While these ancient microbes are known to survive within some of the harshest environments on Earth, including meltwater ponds, the researchers wanted to know whether eukaryotes — complex organisms that evolved a cell nucleus and other membrane-bound organelles — could also weather similarly challenging circumstances. Answering this question would take more than a microscope, as the defining characteristics of the microscopic eukaryotes present among the microbial mats are too subtle to distinguish by eye.
To characterize the eukaryotes, the team analyzed the mats for specific lipids they make called sterols, as well as genetic components called ribosomal ribonucleic acid (rRNA), both of which can be used to identify organisms with varying degrees of specificity. These two independent sets of analyses provided complementary fingerprints for certain eukaryotic groups. As part of the team’s lipid research, they found many sterols and rRNA genes closely associated with specific types of algae, protists, and microscopic animals among the microbial mats. The researchers were able to assess the types and relative abundance of lipids and rRNA genes from pond to pond, and found the ponds hosted a surprising diversity of eukaryotic life.
“No two ponds were alike,” Husain says. “There are repeating casts of characters, but they’re present in different abundances. And we found diverse assemblages of eukaryotes from all the major groups in all the ponds studied. These eukaryotes are the descendants of the eukaryotes that survived the Snowball Earth. This really highlights that meltwater ponds during Snowball Earth could have served as above-ice oases that nurtured the eukaryotic life that enabled the diversification and proliferation of complex life — including us — later on.”
This research was supported, in part, by the NASA Exobiology Program, the Simons Collaboration on the Origins of Life, and a MISTI grant from MIT-New Zealand.
Researchers Ian Hawes of the University of Waikato and Marc Schallenberg of the University of Otago measure the physicochemical conditions of a meltwater pond.
According to the recently published QS World University Rankings 2026, ETH Zurich ranks among the world’s ten best universities once again this year. It took the top spot in continental Europe, with only universities in the US and the United Kingdom ranking higher.
Researchers at MIT and the Scripps Research Institute have shown that they can generate a strong immune response to HIV with just one vaccine dose, by adding two powerful adjuvants — materials that help stimulate the immune system.
In a study of mice, the researchers showed that this approach produced a much wider diversity of antibodies against an HIV antigen, compared to the vaccine given on its own or with just one of the adjuvants. The dual-adjuvant vaccine accumulated in the lymph nodes and remained there for up to a month, allowing the immune system to build up a much greater number of antibodies against the HIV protein.
This strategy could lead to the development of vaccines that only need to be given once, for infectious diseases including HIV or SARS-CoV-2, the researchers say.
“This approach is compatible with many protein-based vaccines, so it offers the opportunity to engineer new formulations for these types of vaccines across a wide range of different diseases, such as influenza, SARS-CoV-2, or other pandemic outbreaks,” says J. Christopher Love, the Raymond A. and Helen E. St. Laurent Professor of Chemical Engineering at MIT, and a member of the Koch Institute for Integrative Cancer Research and the Ragon Institute of MGH, MIT, and Harvard.
Love and Darrell Irvine, a professor of immunology and microbiology at the Scripps Research Institute, are the senior authors of the study, which appears today in Science Translational Medicine. Kristen Rodrigues PhD ’23 and Yiming Zhang PhD ’25 are the lead authors of the paper.
More powerful vaccines
Most vaccines are delivered along with adjuvants, which help to stimulate a stronger immune response to the antigen. One adjuvant commonly used with protein-based vaccines, including those for hepatitis A and B, is aluminum hydroxide, also known as alum. This adjuvant works by activating the innate immune response, helping the body to form a stronger memory of the vaccine antigen.
Several years ago, Irvine developed another adjuvant based on saponin, an FDA-approved adjuvant derived from the bark of the Chilean soapbark tree. His work showed that nanoparticles containing both saponin and a molecule called MPLA, which promotes inflammation, worked better than saponin on its own. That nanoparticle, known as SMNP, is now being used as an adjuvant for an HIV vaccine that is currently in clinical trials.
Irvine and Love then tried combining alum and SMNP and showed that vaccines containing both of those adjuvants could generate even more powerful immune responses against either HIV or SARS-CoV-2.
In the new paper, the researchers wanted to explore why these two adjuvants work so well together to boost the immune response, specifically the B cell response. B cells produce antibodies that can circulate in the bloodstream and recognize a pathogen if the body is exposed to it again.
For this study, the researchers used an HIV protein called MD39 as their vaccine antigen, and anchored dozens of these proteins to each alum particle, along with SMNP.
After vaccinating mice with these particles, the researchers found that the vaccine accumulated in the lymph nodes — structures where B cells encounter antigens and undergo rapid mutations that generate antibodies with high affinity for a particular antigen. This process takes place within clusters of cells known as germinal centers.
The researchers showed that SMNP and alum helped the HIV antigen to penetrate through the protective layer of cells surrounding the lymph nodes without being broken down into fragments. The adjuvants also helped the antigens to remain intact in the lymph nodes for up to 28 days.
“As a result, the B cells that are cycling in the lymph nodes are constantly being exposed to the antigen over that time period, and they get the chance to refine their solution to the antigen,” Love says.
This approach may mimic what occurs during a natural infection, when antigens can remain in the lymph nodes for weeks, giving the body time to build up an immune response.
Antibody diversity
Single-cell RNA sequencing of B cells from the vaccinated mice revealed that the vaccine containing both adjuvants generated a much more diverse repertoire of B cells and antibodies. Mice that received the dual-adjuvant vaccine produced two to three times more unique B cells than mice that received just one of the adjuvants.
That increase in B cell number and diversity boosts the chances that the vaccine could generate broadly neutralizing antibodies — antibodies that can recognize a variety of strains of a given virus, such as HIV.
“When you think about the immune system sampling all of the possible solutions, the more chances we give it to identify an effective solution, the better,” Love says. “Generating broadly neutralizing antibodies is something that likely requires both the kind of approach that we showed here, to get that strong and diversified response, as well as antigen design to get the right part of the immunogen shown.”
Using these two adjuvants together could also contribute to the development of more potent vaccines against other infectious diseases, with just a single dose.
“What’s potentially powerful about this approach is that you can achieve long-term exposures based on a combination of adjuvants that are already reasonably well-understood, so it doesn’t require a different technology. It’s just combining features of these adjuvants to enable low-dose or potentially even single-dose treatments,” Love says.
The research was funded by the National Institutes of Health; the Koch Institute Support (core) Grant from the National Cancer Institute; the Ragon Institute of MGH, MIT, and Harvard; and the Howard Hughes Medical Institute.
Image shows the vaccine antigen (pink) being concentrated in a germinal center (yellow) within B cell follicles (cyan), triggered by the researchers’ combination adjuvant vaccine.
Researchers at ETH Zurich and Empa have developed a new image sensor made of perovskite. This semiconductor material enables better colour reproduction and fewer image artefacts with less light. Perovskite sensors are also particularly well suited for machine vision.
ETH Zurich has set up a master's programme in mechatronics in collaboration with Ashesi University in Ghana. The first cohort of students is now graduating. The project aims to contribute to sustainable industrialisation in sub-Saharan Africa.
Kajal Schiller designed a computer simulation game to measure whether socioeconomic status might determine how likely people are to take advantage of resources that can help them solve problems, with thesis adviser Yael Niv and postdoc Rachel Bedder.
The advanced semiconductor material gallium nitride will likely be key for the next generation of high-speed communication systems and the power electronics needed for state-of-the-art data centers.
Unfortunately, the high cost of gallium nitride (GaN) and the specialization required to incorporate this semiconductor material into conventional electronics have limited its use in commercial applications.
Now, researchers from MIT and elsewhere have developed a new fabrication process that integrates high-performance GaN transistors onto standard silicon CMOS chips in a way that is low-cost and scalable, and compatible with existing semiconductor foundries.
Their method involves building many tiny transistors on the surface of a GaN chip, cutting out each individual transistor, and then bonding just the necessary number of transistors onto a silicon chip using a low-temperature process that preserves the functionality of both materials.
The cost remains minimal since only a tiny amount of GaN material is added to the chip, but the resulting device can receive a significant performance boost from compact, high-speed transistors. In addition, by separating the GaN circuit into discrete transistors that can be spread over the silicon chip, the new technology is able to reduce the temperature of the overall system.
The researchers used this process to fabricate a power amplifier, an essential component in mobile phones, that achieves higher signal strength and efficiencies than devices with silicon transistors. In a smartphone, this could improve call quality, boost wireless bandwidth, enhance connectivity, and extend battery life.
Because their method fits into standard procedures, it could improve electronics that exist today as well as future technologies. Down the road, the new integration scheme could even enable quantum applications, as GaN performs better than silicon at the cryogenic temperatures essential for many types of quantum computing.
“If we can bring the cost down, improve the scalability, and, at the same time, enhance the performance of the electronic device, it is a no-brainer that we should adopt this technology. We’ve combined the best of what exists in silicon with the best possible gallium nitride electronics. These hybrid chips can revolutionize many commercial markets,” says Pradyot Yadav, an MIT graduate student and lead author of a paper on this method.
He is joined on the paper by fellow MIT graduate students Jinchen Wang and Patrick Darmawi-Iskandar; MIT postdoc John Niroula; senior authors Ulrich L. Rohde, a visiting scientist at the Microsystems Technology Laboratories (MTL), and Ruonan Han, an associate professor in the Department of Electrical Engineering and Computer Science (EECS) and member of MTL; and Tomás Palacios, the Clarence J. LeBel Professor of EECS, and director of MTL; as well as collaborators at Georgia Tech and the Air Force Research Laboratory. The research was recently presented at the IEEE Radio Frequency Integrated Circuits Symposium.
Swapping transistors
Gallium nitride is the second most widely used semiconductor in the world, just after silicon, and its unique properties make it ideal for applications such as lighting, radar systems and power electronics.
The material has been around for decades and, to get access to its maximum performance, it is important for chips made of GaN to be connected to digital chips made of silicon, also called CMOS chips. To enable this, some integration methods bond GaN transistors onto a CMOS chip by soldering the connections, but this limits how small the GaN transistors can be. The tinier the transistors, the higher the frequency at which they can work.
Other methods integrate an entire gallium nitride wafer on top of a silicon wafer, but using so much material is extremely costly, especially since the GaN is only needed in a few tiny transistors. The rest of the material in the GaN wafer is wasted.
“We wanted to combine the functionality of GaN with the power of digital chips made of silicon, but without having to compromise on either cost or bandwidth. We achieved that by adding super-tiny discrete gallium nitride transistors right on top of the silicon chip,” Yadav explains.
The new chips are the result of a multistep process.
First, a tightly packed collection of minuscule transistors is fabricated across the entire surface of a GaN wafer. Using very fine laser technology, the researchers cut each one down to just the size of the transistor, 240 by 410 microns, forming what they call a dielet. (A micron is one millionth of a meter.)
Each transistor is fabricated with tiny copper pillars on top, which the researchers use to bond directly to the copper pillars on the surface of a standard silicon CMOS chip. Copper-to-copper bonding can be done at temperatures below 400 degrees Celsius, which is low enough to avoid damaging either material.
Current GaN integration techniques require bonds that utilize gold, an expensive material that needs much higher temperatures and stronger bonding forces than copper. Since gold can contaminate the tools used in most semiconductor foundries, it typically requires specialized facilities.
“We wanted a process that was low-cost, low-temperature, and low-force, and copper wins on all of those relative to gold. At the same time, it has better conductivity,” Yadav says.
A new tool
To enable the integration process, they created a specialized new tool that can carefully integrate the extremely tiny GaN transistor with the silicon chips. The tool uses a vacuum to hold the dielet as it moves on top of a silicon chip, zeroing in on the copper bonding interface with nanometer precision.
They used advanced microscopy to monitor the interface, and then when the dielet is in the right position, they apply heat and pressure to bond the GaN transistor to the chip.
“For each step in the process, I had to find a new collaborator who knew how to do the technique that I needed, learn from them, and then integrate that into my platform. It was two years of constant learning,” Yadav says.
Once the researchers had perfected the fabrication process, they demonstrated it by developing power amplifiers, which are radio frequency circuits that boost wireless signals.
Their devices achieved higher bandwidth and better gain than devices made with traditional silicon transistors. Each compact chip has an area of less than half a square millimeter.
In addition, because the silicon chip they used in their demonstration is based on Intel 16, a state-of-the-art 22nm FinFET process with advanced metallization and passive options, they were able to incorporate components often used in silicon circuits, such as neutralization capacitors. This significantly improved the gain of the amplifier, bringing it one step closer to enabling the next generation of wireless technologies.
“To address the slowdown of Moore’s Law in transistor scaling, heterogeneous integration has emerged as a promising solution for continued system scaling, reduced form factor, improved power efficiency, and cost optimization. Particularly in wireless technology, the tight integration of compound semiconductors with silicon-based wafers is critical to realizing unified systems of front-end integrated circuits, baseband processors, accelerators, and memory for next-generation antennas-to-AI platforms. This work makes a significant advancement by demonstrating 3D integration of multiple GaN chips with silicon CMOS and pushes the boundaries of current technological capabilities,” says Atom Watanabe, a research scientist at IBM who was not involved with this paper.
This work is supported, in part, by the U.S. Department of Defense through the National Defense Science and Engineering Graduate (NDSEG) Fellowship Program and CHIMES, one of the seven centers in JUMP 2.0, a Semiconductor Research Corporation Program by the Department of Defense and the Defense Advanced Research Projects Agency (DARPA). Fabrication was carried out using facilities at MIT.Nano, the Air Force Research Laboratory, and Georgia Tech.
Researchers have developed a new fabrication process that integrates high-performance gallium nitride transistors onto standard silicon CMOS chips in a way that is low-cost and scalable.
Cuts imperil ‘keys to future health’
Nicole Romero removes biological samples from a freezer at the Chan School of Public Health.
Photos by Veasey Conway/Harvard Staff Photographer
Anna Lamb
Harvard Staff Writer
June 17, 2025
Chan School scrambles to protect living legacy of landmark Nurses’ Health studies
We all know that smoking is a killer. Postmenopausal women are told by their doctors to maintain a healthy weight to reduce their risk of breast cancer. Trans fats have mostly disappeared from our diets.
These groundbreaking interventions are rooted in the Nurses’ Health Studies, which have tracked data on lives and lifestyles — and taken biological samples — from thousands of participating nurses for decades. Now, federal research funding cuts are putting these efforts in jeopardy.
The studies’ biological samples are stored in a network of high-powered freezers, which must maintain temperatures as low as minus 170 degrees Celsius. The freezers are filled with liquid nitrogen and maintained by a small staff of research assistants and managers at the Harvard T.H. Chan School of Public Health and Brigham and Women’s Hospital. Grants to operate the biorepository have been terminated.
“These women have given all they can for us,” said biobank manager Janine Neville-Golden of the volunteers whose blood and tissue are stored in the repository and who have given their time over years to answer questionnaires and undergo observation during life changes such as illness and pregnancy. “We’ve got to protect these right now. They’re keys to future health.”
The Nurses’ Health Study, which was started in 1976 and continued through a second cohort, Nurses’ Health Study II, in the late ’80s, has contributed to breakthroughs in diet research, cancer research, and the understanding of hormones in women’s health. The Nurses’ Health Study 3, launched in 2010, includes different types of health workers and, for the first time, male nurses.
In 2010, Chan School researchers Jorge Chavarro, Walter Willett, Janet Rich-Edwards, and Stacey Missmer launched the third study in collaboration with investigators at the Channing Division of Network Medicine at Brigham and Women’s Hospital and Harvard Medical School. This initiative was started in conjunction with the Growing Up Today Study (GUTS), which recruits children of Nurses’ Health II participants and has sought to expand on insights gathered through the previous studies.
In light of the funding cuts, collection of samples for both studies has ceased.
“This is a unique, irreplaceable resource, in many regards,” said Chavarro, principal investigator for both Nurses’ Health Study 3 and GUTS. “There are other biobanks, but this one is unique in scale and associated data.”
Chavarro said that while biological samples are relatively easy to capture, “What makes a biobank useful is the fact that you’re able to connect information from those samples to information about the health of people.”
Many of the participants from Nurses’ Health Studies I and II — who number in the hundreds of thousands — have given multiple biological samples, including urine, cheek swabs, and blood.
“These samples were first collected when people were young and healthy and contain decades worth of information about people’s lifestyles, and follow-ups,” Chavarro said.
The team that operates the biorepository gets hundreds of external requests a year for access to samples.
He noted that the Chan School is working hard to find funding to replace the lost federal grants. But without a long-term solution, the essential liquid nitrogen cannot be purchased, and millions of samples will degrade.
“If there’s not a sustainable mechanism to continue paying for the ongoing operations of the biorepositories, we’re going to lose samples,” he said. “It’s probably not going to be this week, but it is not something that can wait forever.”
Chavarro said that there is also a real risk of losing the team that makes specimen research like his possible.
“Operating a biorepository is not just putting samples in a freezer. It requires a lot of specific technical expertise,” he said. “You need to know, how do you store samples under what conditions and you need somebody if there’s a freezer failure — you need people who know how to respond.”
Neville-Golden, who has been with the biorepository for 16 years, manages a skeleton crew responsible for maintaining and pulling samples for research. If something goes wrong with a freezer in the middle of the night, she’s the one who’s called. For her, the project is very much a human endeavor.
“I’ve learned a lot,” she said, from freezer maintenance to the lofty goals of the researchers she works with to test hypotheses against the samples. “It’s not about the money, it’s about service and the greater good.”
Neville-Golden said that her team — a handful of research assistants and a dedicated project manager, Nicole Romero — gets somewhere in the neighborhood of 200 external requests per year to use data from the cohorts.
“There’s a long waiting line to get access to these samples, just because there’s not enough person-power to be pulling out the samples that people want,” she said.
Their team hand-picks samples out of tens of thousands stored in each freezer, thaws them, and makes them research-ready. It’s hard work, but Neville-Golden said she tries to keep in mind the people who gave the samples, and what they hoped when giving of themselves.
“A couple of them we’ve talked to over the years said when we are at work and we see people who are that ill and going through everything that they’re going through, we want to do whatever we can do to stop that, to make things better, to eliminate the pain, the suffering, that kind of thing,” she said. “So it really has been a labor of love.”
MIT Morningside Academy for Design (MAD) Fellow Caitlin Morris is an architect, artist, researcher, and educator who has studied psychology and used online learning tools to teach herself coding and other skills. She’s a soft-spoken observer, with a keen interest in how people use space and respond to their environments. Combining her observational skills with active community engagement, she works at the intersection of technology, education, and human connection to improve digital learning platforms.
Morris grew up in rural upstate New York in a family of makers. She learned to sew, cook, and build things with wood at a young age. One of her earlier memories is of a small handsaw she made — with the help of her father, a professional carpenter. It had wooden handles on both sides to make sawing easier for her.
Later, when she needed to learn something, she’d turn to project-based communities, rather than books. She taught herself to code late at night, taking advantage of community-oriented platforms where people answer questions and post sketches, allowing her to see the code behind the objects people made.
“For me, that was this huge, wake-up moment of feeling like there was a path to expression that was not a traditional computer-science classroom,” she says. “I think that’s partly why I feel so passionate about what I’m doing now. That was the big transformation: having that community available in this really personal, project-based way.”
Subsequently, Morris has become involved in community-based learning in diverse ways: She’s a co-organizer of the MIT Media Lab’s Festival of Learning; she leads creative coding community meetups; and she’s been active in open-source software development communities.
“My years of organizing learning and making communities — both in person and online — have shown me firsthand how powerful social interaction can be for motivation and curiosity,” Morris said. “My research is really about identifying which elements of that social magic are most essential, so we can design digital environments that better support those dynamics.”
Even in her artwork, Morris sometimes works with a collective. She’s contributed to the creation of about 10 large art installations that combine movement, sound, imagery, lighting, and other technologies to immerse the visitor in an experience evoking some aspect of nature, such as flowing water, birds in flight, or crowd kinetics. These marvelous installations are commanding and calming at the same time, possibly because they focus the mind, eye, and sometimes the ear.
She did much of this work with New York-based Hypersonic, a company of artists and technologists specializing in large kinetic installations in public spaces. Before that, she earned a BS in psychology and a BS in architectural building sciences from Rensselaer Polytechnic Institute, then an MFA in design and technology from the Parsons School of Design at The New School.
During, in between, after, and sometimes concurrently, she taught design, coding, and other technologies at the high school, undergraduate, and graduate-student levels.
“I think what kind of got me hooked on teaching was that the way I learned as a child was not the same as in the classroom,” Morris explains. “And I later saw this in many of my students. I got the feeling that the normal way of learning things was not working for them. And they thought it was their fault. They just didn’t really feel welcome within the traditional education model.”
Morris says that when she worked with those students, tossing aside tradition and instead saying — “You know, we’re just going to do this animation. Or we’re going to make this design or this website or these graphics, and we’re going to approach it in this totally different way” — she saw people “kind of unlock and be like, ‘Oh my gosh. I never thought I could do that.’
“For me, that was the hook, that’s the magic of it. Because I was coming from that experience of having to figure out those unlock mechanisms for myself, it was really exciting to be able to share them with other people, those unlock moments.”
For her doctoral work with the MIT Media Lab’s Fluid Interfaces Group, she’s focusing on the personal space and emotional gaps associated with learning, particularly online and AI-assisted learning. This research builds on her experience increasing human connection in both physical and virtual learning environments.
“I’m developing a framework that combines AI-driven behavioral analysis with human expert assessment to study social learning dynamics,” she says. “My research investigates how social interaction patterns influence curiosity development and intrinsic motivation in learning, with particular focus on understanding how these dynamics differ between real peers and AI-supported environments.”
The first step in her research is determining which elements of social interaction are not replaceable by an AI-based digital tutor. Following that assessment, her goal is to build a prototype platform for experiential learning.
“I’m creating tools that can simultaneously track observable behaviors — like physical actions, language cues, and interaction patterns — while capturing learners’ subjective experiences through reflection and interviews,” Morris explains. “This approach helps connect what people do with how they feel about their learning experience.
“I aim to make two primary contributions: first, analysis tools for studying social learning dynamics; and second, prototype tools that demonstrate practical approaches for supporting social curiosity in digital learning environments. These contributions could help bridge the gap between the efficiency of digital platforms and the rich social interaction that occurs in effective in-person learning.”
Her goals make Morris a perfect fit for the MIT MAD Fellowship. One statement in MAD’s mission is: “Breaking away from traditional education, we foster creativity, critical thinking, making, and collaboration, exploring a range of dynamic approaches to prepare students for complex, real-world challenges.”
Morris wants to help community organizations deal with the rapid AI-powered changes in education, once she finishes her doctorate in 2026. “What should we do with this ‘physical space versus virtual space’ divide?” she asks. That is the space currently captivating Morris’s thoughts.
During his first year at MIT in 2021, Matthew Caren ’25 received an intriguing email inviting students to apply to become members of the MIT Schwarzman College of Computing’s (SCC) Undergraduate Advisory Group (UAG). He immediately shot off an application.
Caren is a jazz musician who majored in computer science and engineering, and minored in music and theater arts. He was drawn to the college because of its focus on the applied intersections between computing, engineering, the arts, and other academic pursuits. Caren eagerly joined the UAG and stayed on it all four years at MIT.
First formed in April 2020, the group brings together a committee of around 25 undergraduate students representing a broad swath of both traditional and blended majors in electrical engineering and computer science (EECS) and other computing-related programs. They advise the college’s leadership on issues, offer constructive feedback, and serve as a sounding board for innovative new ideas.
“The ethos of the UAG is the ethos of the college itself,” Caren explains. “If you very intentionally bring together a bunch of smart, interesting, fun-to-be-around people who are all interested in completely diverse things, you'll get some really cool discussions and interactions out of it.”
Along the way, he’s also made “dear” friends and found true colleagues. In the group’s monthly meetings with SCC Dean Dan Huttenlocher and Deputy Dean Asu Ozdaglar, who is also the department head of EECS, UAG members speak openly about challenges in the student experience and offer recommendations to guests from across the Institute, such as faculty who are developing new courses and looking for student input.
“This group is unique in the sense that it’s a direct line of communication to the college’s leadership,” says Caren. “They make time in their insanely busy schedules for us to explain where the holes are, and what students’ needs are, directly from our experiences.”
“The students in the group are keenly interested in computer science and AI, especially how these fields connect with other disciplines. They’re also passionate about MIT and eager to enhance the undergraduate experience. Hearing their perspective is refreshing — their honesty and feedback have been incredibly helpful to me as dean,” says Huttenlocher.
“Meeting with the students each month is a real pleasure. The UAG has been an invaluable space for understanding the student experience more deeply. They engage with computing in diverse ways across MIT, so their input on the curriculum and broader college issues has been insightful,” Ozdaglar says.
UAG program manager Ellen Rushman says that “Asu and Dan have done an amazing job cultivating a space in which students feel safe bringing up things that aren’t positive all the time.” The group’s suggestions are frequently implemented, too.
For example, in 2021, Skidmore, Owings & Merrill, the architects designing the new SCC building, presented their renderings at a UAG meeting to request student feedback. Their original interiors layout offered very few of the hybrid study and meeting booths that are so popular in today’s first floor lobby.
Hearing strong UAG opinions about the sort of open-plan, community-building spaces that students really valued was one of the things that created the change to the current floor plan. “It’s super cool walking into the personalized space and seeing it constantly being in use and always crowded. I actually feel happy when I can’t get a table,” says Caren, who has just ended his tenure as co-chair of the group in preparation for graduation.
Caren’s co-chair, rising senior Julia Schneider, who is double-majoring in artificial intelligence and decision-making and mathematics, joined the UAG as a first-year to understand more about the college’s mission of fostering interdepartmental collaborations.
“Since I am a student in electrical engineering and computer science, but I conduct research in mechanical engineering on robotics, the college’s mission of fostering interdepartmental collaborations and uniting them through computing really spoke to my personal experiences in my first year at MIT,” Schneider says.
During her time on the UAG, members have joined subgroups focused on achieving different programmatic goals of the college, such as curating a public lecture series for the 2025-26 academic year to give MIT students exposure to faculty who conduct research in other disciplines that relate to computing.
At one meeting, after hearing how challenging it is for students to understand all the possible courses to take during their tenure, Schneider and some UAG peers formed a subgroup to find a solution.
The students agreed that some of the best courses they’ve taken at MIT, or pairings of courses that really struck a chord with their interdisciplinary interests, came because they spoke to upperclassmen and got recommendations. “This kind of tribal knowledge doesn’t really permeate to all of MIT,” Schneider explains.
For the last six months, Schneider and the subgroup have been working on a course visualization website, NerdXing, which came out of these discussions.
Guided by Rob Miller, Distinguished Professor of Computer Science in EECS, the subgroup used a dataset of EECS course enrollments over the past decade to develop a different type of tool than MIT students typically use, such as CourseRoad and others.
Miller, who regularly attends the UAG meetings in his role as the education officer for the college’s cross-cutting initiative, Common Ground for Computing Education, comments, “the really cool idea here is to help students find paths that were taken by other people who are like them — not just interested in computer science, but maybe also in biology, or music, or economics, or neuroscience. It's very much in the spirit of the College of Computing — applying data-driven computational methods, in support of students with wide-ranging computational interests.”
Opening the NerdXing pilot, Schneider gave a demo. If you are a computer science (CS) major, she explains, you select your major and a class of interest, then expand a huge graph presenting all the courses your CS peers have taken over the past decade.
She clicked on class 18.404 (Theory of Computation) as the starting class of interest, which led to class 6.7900 (Machine Learning), and then unexpectedly to 21M.302 (Harmony and Counterpoint II), an advanced music class.
“You start to see aggregate statistics that tell you how many students took each course, and you can further pare it down to see the most popular courses in CS or follow lines of red dots between courses to see the typical sequence of classes taken.”
By getting granular on the graph, users begin to see classes that they have probably never heard anyone talking about in their program. “I think that one of the reasons you come to MIT is to be able to take cool stuff exactly like this,” says Schneider.
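The core idea behind such a tool, counting how often one course follows another in real enrollment histories, can be sketched in a few lines. This is an illustrative stand-in, not NerdXing's actual code, and the `history` data format here is hypothetical:

```python
from collections import Counter

def course_transitions(enrollments):
    """Count how often students took course B in a later term than course A.
    `enrollments` maps student -> list of (term, course) pairs (hypothetical format)."""
    counts = Counter()
    for records in enrollments.values():
        ordered = sorted(records)  # sort each student's record by term
        for i, (_, a) in enumerate(ordered):
            for _, b in ordered[i + 1:]:
                counts[(a, b)] += 1
    return counts

# Two toy enrollment histories echoing the sequence from the demo
history = {
    "student1": [(1, "18.404"), (2, "6.7900"), (3, "21M.302")],
    "student2": [(1, "18.404"), (2, "6.7900")],
}
print(course_transitions(history)[("18.404", "6.7900")])  # 2
```

Aggregating these pair counts over a decade of records is what lets the graph surface popular sequences and rare but interesting detours.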
The tool aims to show students how they can choose classes that go far beyond just filling degree requirements. It’s just one example of how UAG is empowering students to strengthen the college and the experiences it offers them.
“We are MIT students. We have the skills to build solutions,” Schneider says. “This group of people not only brings up ways in which things could be better, but we take it into our own hands to fix things.”
Members of the Undergraduate Advisory Group and MIT Schwarzman College of Computing leadership (left to right) Asu Ozdaglar, Alexandra Volkova, Matthew Caren, Julia Schneider, Claire Mao, Rachel Loh, and Dan Huttenlocher.
Research has shown that large language models (LLMs) tend to overemphasize information at the beginning and end of a document or conversation, while neglecting the middle.
This “position bias” means that, if a lawyer is using an LLM-powered virtual assistant to retrieve a certain phrase in a 30-page affidavit, the LLM is more likely to find the right text if it is on the initial or final pages.
MIT researchers have discovered the mechanism behind this phenomenon.
They created a theoretical framework to study how information flows through the machine-learning architecture that forms the backbone of LLMs. They found that certain design choices which control how the model processes input data can cause position bias.
Their experiments revealed that model architectures, particularly those affecting how information is spread across input words within the model, can give rise to or intensify position bias, and that training data also contribute to the problem.
In addition to pinpointing the origins of position bias, their framework can be used to diagnose and correct it in future model designs.
This could lead to more reliable chatbots that stay on topic during long conversations, medical AI systems that reason more fairly when handling a trove of patient data, and code assistants that pay closer attention to all parts of a program.
“These models are black boxes, so as an LLM user, you probably don’t know that position bias can cause your model to be inconsistent. You just feed it your documents in whatever order you want and expect it to work. But by understanding the underlying mechanism of these black-box models better, we can improve them by addressing these limitations,” says Xinyi Wu, a graduate student in the MIT Institute for Data, Systems, and Society (IDSS) and the Laboratory for Information and Decision Systems (LIDS), and first author of a paper on this research.
Her co-authors include Yifei Wang, an MIT postdoc; and senior authors Stefanie Jegelka, an associate professor of electrical engineering and computer science (EECS) and a member of IDSS and the Computer Science and Artificial Intelligence Laboratory (CSAIL); and Ali Jadbabaie, professor and head of the Department of Civil and Environmental Engineering, a core faculty member of IDSS, and a principal investigator in LIDS. The research will be presented at the International Conference on Machine Learning.
Analyzing attention
LLMs like Claude, Llama, and GPT-4 are powered by a type of neural network architecture known as a transformer. Transformers are designed to process sequential data, encoding a sentence into chunks called tokens and then learning the relationships between tokens to predict what word comes next.
These models have gotten very good at this because of the attention mechanism, which uses interconnected layers of data processing nodes to make sense of context by allowing tokens to selectively focus on, or attend to, related tokens.
But if every token can attend to every other token in a 30-page document, that quickly becomes computationally intractable. So, when engineers build transformer models, they often employ attention masking techniques which limit the words a token can attend to.
For instance, a causal mask allows each word to attend only to the words that came before it.
Engineers also use positional encodings to help the model understand the location of each word in a sentence, improving performance.
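Both ingredients are simple to write down. The sketch below (a minimal illustration in plain Python, not the researchers' code) builds a causal mask, where token i may attend only to tokens j ≤ i, and the classic sinusoidal positional encoding:

```python
import math

def causal_mask(n):
    # mask[i][j] is True when token i may attend to token j (only j <= i allowed)
    return [[j <= i for j in range(n)] for i in range(n)]

def sinusoidal_encoding(pos, dim):
    # standard sinusoidal positional encoding: sin on even dims, cos on odd dims
    return [
        math.sin(pos / 10000 ** (i / dim)) if i % 2 == 0
        else math.cos(pos / 10000 ** ((i - 1) / dim))
        for i in range(dim)
    ]

mask = causal_mask(4)
print(mask[0])  # the first token attends only to itself: [True, False, False, False]
print(mask[3])  # the last token attends to everything:   [True, True, True, True]
```

Note the asymmetry the mask introduces: early tokens are visible to every later token, which is exactly the structural tilt toward the beginning of the input that the researchers' analysis traces through the layers.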
The MIT researchers built a graph-based theoretical framework to explore how these modeling choices, attention masks and positional encodings, could affect position bias.
“Everything is coupled and tangled within the attention mechanism, so it is very hard to study. Graphs are a flexible language to describe the dependent relationship among words within the attention mechanism and trace them across multiple layers,” Wu says.
Their theoretical analysis suggested that causal masking gives the model an inherent bias toward the beginning of an input, even when that bias doesn’t exist in the data.
If the earlier words are relatively unimportant for a sentence’s meaning, causal masking can cause the transformer to pay more attention to its beginning anyway.
“While it is often true that earlier words and later words in a sentence are more important, if an LLM is used on a task that is not natural language generation, like ranking or information retrieval, these biases can be extremely harmful,” Wu says.
As a model grows, with additional layers of attention mechanism, this bias is amplified because earlier parts of the input are used more frequently in the model’s reasoning process.
They also found that using positional encodings to link words more strongly to nearby words can mitigate position bias. The technique refocuses the model’s attention in the right place, but its effect can be diluted in models with more attention layers.
And these design choices are only one cause of position bias — some can come from training data the model uses to learn how to prioritize words in a sequence.
“If you know your data are biased in a certain way, then you should also finetune your model on top of adjusting your modeling choices,” Wu says.
Lost in the middle
After they’d established a theoretical framework, the researchers performed experiments in which they systematically varied the position of the correct answer in text sequences for an information retrieval task.
The experiments showed a “lost-in-the-middle” phenomenon, where retrieval accuracy followed a U-shaped pattern. Models performed best if the right answer was located at the beginning of the sequence. Performance declined the closer it got to the middle before rebounding a bit if the correct answer was near the end.
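The shape of such an experiment is straightforward to sketch: slide the correct answer through every position and measure retrieval accuracy at each one. This harness is a hedged stand-in (the `model` here is a toy callable, not an LLM), meant only to show the protocol:

```python
def run_position_sweep(model, n_items, n_trials=10):
    """Place the correct answer at each position in turn and record accuracy.
    `model` is any callable taking (sequence, query) and returning an index."""
    accuracy = []
    for pos in range(n_items):
        hits = 0
        for _ in range(n_trials):
            sequence = ["distractor"] * n_items
            sequence[pos] = "answer"
            if model(sequence, "answer") == pos:
                hits += 1
        accuracy.append(hits / n_trials)
    return accuracy

# A toy bias-free "model" retrieves perfectly at every position;
# a position-biased LLM would instead trace the U-shaped curve described above.
perfect = lambda seq, query: seq.index(query)
print(run_position_sweep(perfect, 5))  # [1.0, 1.0, 1.0, 1.0, 1.0]
```

Plotting accuracy against position for a real model is what reveals the U: high at the ends, sagging in the middle.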
Ultimately, their work suggests that using a different masking technique, removing extra layers from the attention mechanism, or strategically employing positional encodings could reduce position bias and improve a model’s accuracy.
“By doing a combination of theory and experiments, we were able to look at the consequences of model design choices that weren’t clear at the time. If you want to use a model in high-stakes applications, you must know when it will work, when it won’t, and why,” Jadbabaie says.
In the future, the researchers want to further explore the effects of positional encodings and study how position bias could be strategically exploited in certain applications.
“These researchers offer a rare theoretical lens into the attention mechanism at the heart of the transformer model. They provide a compelling analysis that clarifies longstanding quirks in transformer behavior, showing that attention mechanisms, especially with causal masks, inherently bias models toward the beginning of sequences. The paper achieves the best of both worlds — mathematical clarity paired with insights that reach into the guts of real-world systems,” says Amin Saberi, professor and director of the Stanford University Center for Computational Market Design, who was not involved with this work.
This research is supported, in part, by the U.S. Office of Naval Research, the National Science Foundation, and an Alexander von Humboldt Professorship.
MIT researchers discovered the underlying cause of position bias, a phenomenon that causes large language models to overemphasize the beginning or end of a document or conversation, while neglecting the middle.
Christine Wenc. Photo by Alexander Andre
Nation & World
Onion holds up mirror; society flashes big smile (with green stuff in teeth)
How some students at University of Wisconsin-Madison created satiric cultural institution
Liz Mineo
Harvard Staff Writer
June 17, 2025
6 min read
The Onion has been making fun of human folly since its founding by two undergrads at the University of Wisconsin-Madison in 1988. The mock news site has created satiric pieces so smart some believed them real, others that were just plain silly, and one headline (“‘No Way to Prevent This,’ Says Only Nation Where This Regularly Happens”) that has achieved a dark fame after being reposted after each U.S. mass shooting since the 2014 Isla Vista, Calif., attack.
In this edited interview, Christine Wenc, A.M. ’08, talks about her new book “Funny Because It’s True,” on the origins of the newspaper that proclaims itself “America’s Finest News Source.” Wenc spoke about the legacy of The Onion, now based in Chicago, how it created modern news satire, and why it is revered as a cultural institution.
You were part of the original staff of The Onion. What drew you in?
I was 19 years old, and Tim Keck, who founded The Onion in 1988, was one of my roommates. Tim was a college sophomore and the youngest child of a Midwestern newspaper family. He needed money, and with his friend Chris Johnson, decided to start a college newspaper. When they asked me to join in, I was like, “Sure” because that’s just what you do when you’re 19 years old.
Tim recruited a bunch of humanities majors, and one of them, who was an improv comedian, suggested that the paper should just be all made-up stories, and that’s how it happened. I left a couple of years later, with other staff, to become the editor of Seattle’s alternative weekly, The Stranger, which was also started by Keck.
How did The Onion develop its distinctive satirical voice?
It didn’t find its voice until later on. It was a parody of the National Enquirer in the beginning. The first headline was “Mendota Monster Mauls Madison” about a monster that had been sighted in Lake Mendota, near UW-Madison.
It was in the mid-1990s when a new staff developed that dry, satirical voice we now recognize in The Onion. Around that time, the writers argued about the paper’s mission and decided that it would critique society from a progressive point of view.
“But from the very beginning, The Onion endeavored to make fun of human foolishness. Its motto was ‘Tu Stultus Es,’ ‘You Are Dumb’ in Latin.”
The original members were from Wisconsin and shared a working-class background. Madison is known as the Berkeley of the Midwest, and it was that microclimate, a sort of Midwestern progressive underdog spirit, that infused The Onion’s satire.
But from the very beginning, The Onion endeavored to make fun of human foolishness. Its motto was “Tu Stultus Es,” “You Are Dumb” in Latin.
The Onion’s intent was never to be political. The point was to entertain, but its humor had to follow certain rules, such as we don’t make fun of women, we make fun of sexism; we don’t make fun of Black people, we make fun of racism. Pointing out things like racism and sexism was pointing at humans being stupid.
There have been cases in which The Onion’s satire was not understood, and some readers believed those made-up stories to be true. Was that a concern?
We were always aware that there were people who couldn’t tell the difference between The Onion’s news satire and real news. There are stories that were clearly jokes that were believed to be real.
For example, a piece that the Chinese media picked up about the U.S. Congress demanding to build a new Capitol; another on how the Harry Potter books were responsible for increasing Satanism; and one that announced the opening of an $8 billion Abortionplex by Planned Parenthood. The latter was reposted by a conservative politician.
Satire is a literary art form, and The Onion writers were a bunch of creative, artistic, progressive weirdos who were working outside the system and kept an independent point of view, which allowed them to see things that others couldn’t or didn’t want to see.
For instance, The Onion didn’t fall for the false claim of weapons of mass destruction during the Gulf War; headline after headline, it made fun of it while the mainstream media acted like court stenographers repeating what they were being told by the administration. It turned out that The Onion was correct.
Recent real news headlines look like they have come out of The Onion. What can a satirical newspaper do when reality seems so bizarre?
Right now, the real world indeed seems like an Onion headline because people in the real world are behaving like people in Onion stories. There is a level of respect or propriety that has just been erased in the behavior of a lot of public figures.
On the other hand, I think The Onion’s success lies in its unique kind of humor, which pairs a straight news format, faithfully mimicking the dry style of an Associated Press story, with ridiculous content. It’s the juxtaposition between the matter-of-fact tone and the crazy stories that makes it funny. What The Onion does is news satire — good fake news — to point at the wrongs of the world to make it better. The Onion never did fake news, which is manufactured as propaganda to sow chaos and make people afraid.
What impact do you think The Onion has had on modern news satire?
“Satire has always used humor to point out the world’s injustices.”
It helped create American modern news satire. Satire has always used humor to point out the world’s injustices. It’s one of the few rhetorical devices that is effective against spin or manipulation. By poking fun at the flaws of humans and society, satire just has a way to expose the absurdities of life. It’s more than telling jokes.
The Onion is held in very high esteem in the comedy world. Many Onion writers have gone to work for comedy shows, including Jon Stewart’s “Daily Show” and “The Colbert Report” and others.
I’m glad to see The Onion alive. First, it was Generation X, which started The Onion, then the millennials behind the Onion News Network, a spoof of CNN and Fox News. Ben Collins, The Onion’s CEO, keeps saying that The Onion can say stuff in a way that the real news can’t. And that, to me, is part of the progressive tradition that inspired the original founders.
I think that humor and satire can also be survival mechanisms and a morale builder. Which is not nothing, especially in difficult times. Satire says that you are not the only one who thinks what’s going on is ridiculous — and it is always helpful to know you are not alone.
MIT researchers have designed a compact, low-power receiver for 5G-compatible smart devices that is about 30 times more resilient to a certain type of interference than some traditional wireless receivers.
The low-cost receiver would be ideal for battery-powered internet of things (IoT) devices like environmental sensors, smart thermostats, or other devices that need to run continuously for a long time, such as health wearables, smart cameras, or industrial monitoring sensors.
The researchers’ chip uses a passive filtering mechanism that consumes less than a milliwatt of static power while protecting both the input and output of the receiver’s amplifier from unwanted wireless signals that could jam the device.
Key to the new approach is a novel arrangement of precharged, stacked capacitors, which are connected by a network of tiny switches. These minuscule switches need much less power to be turned on and off than those typically used in IoT receivers.
The receiver’s capacitor network and amplifier are carefully arranged to leverage a phenomenon in amplification that allows the chip to use much smaller capacitors than would typically be necessary.
“This receiver could help expand the capabilities of IoT gadgets. Smart devices like health monitors or industrial sensors could become smaller and have longer battery lives. They would also be more reliable in crowded radio environments, such as factory floors or smart city networks,” says Soroush Araei, an electrical engineering and computer science (EECS) graduate student at MIT and lead author of a paper on the receiver.
He is joined on the paper by Mohammad Barzgari, a postdoc in the MIT Research Laboratory of Electronics (RLE); Haibo Yang, an EECS graduate student; and senior author Negar Reiskarimian, the X-Window Consortium Career Development Assistant Professor in EECS at MIT and a member of the Microsystems Technology Laboratories and RLE. The research was recently presented at the IEEE Radio Frequency Integrated Circuits Symposium.
A new standard
A receiver acts as the intermediary between an IoT device and its environment. Its job is to detect and amplify a wireless signal, filter out any interference, and then convert it into digital data for processing.
Traditionally, IoT receivers operate on fixed frequencies and suppress interference using a single narrow-band filter, which is simple and inexpensive.
But the new technical specifications of the 5G mobile network enable reduced-capability devices that are more affordable and energy-efficient. This opens a range of IoT applications to the faster data speeds and increased network capability of 5G. These next-generation IoT devices need receivers that can tune across a wide range of frequencies while still being cost-effective and low-power.
“This is extremely challenging because now we need to not only think about the power and cost of the receiver, but also flexibility to address numerous interferers that exist in the environment,” Araei says.
To reduce the size, cost, and power consumption of an IoT device, engineers can’t rely on the bulky, off-chip filters that are typically used in devices that operate on a wide frequency range.
One solution is to use a network of on-chip capacitors that can filter out unwanted signals. But these capacitor networks are prone to a special type of signal noise known as harmonic interference.
In prior work, the MIT researchers developed a novel switch-capacitor network that targets these harmonic signals as early as possible in the receiver chain, filtering out unwanted signals before they are amplified and converted into digital bits for processing.
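To see why harmonic interference arises, consider a simple illustration (not the MIT design itself): a front end whose mixer is switched at a local-oscillator frequency f_LO also responds to signals near odd multiples of f_LO, so a strong blocker at one of those harmonics folds onto the desired channel. The numbers below are hypothetical.

```python
# Illustrative sketch of harmonic interference in a switch-based
# (switched-capacitor) receiver front end. A mixer clocked at f_LO
# also down-converts signals near odd harmonics k * f_LO (k = 3, 5, ...),
# so blockers at those frequencies alias onto the desired channel.
# All values are hypothetical, for illustration only.

def harmonic_responses(f_lo_hz, max_harmonic=7):
    """Odd-harmonic frequencies that fold into the desired band."""
    return [k * f_lo_hz for k in range(3, max_harmonic + 1, 2)]

f_lo = 1.0e9  # hypothetical 1 GHz local oscillator
folds = harmonic_responses(f_lo)
print(folds)  # interferers near 3, 5, and 7 GHz would alias onto the channel
```

Filtering these harmonics early in the chain, before amplification, is what the researchers' switch-capacitor network is designed to do.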
Shrinking the circuit
Here, they extended that approach by using the novel switch-capacitor network as the feedback path in an amplifier with negative gain. This configuration leverages the Miller effect, a phenomenon that enables small capacitors to behave like much larger ones.
“This trick lets us meet the filtering requirement for narrow-band IoT without physically large components, which drastically shrinks the size of the circuit,” Araei says.
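The Miller effect itself reduces to simple arithmetic: a capacitance C placed in the feedback path of an inverting amplifier with gain of magnitude A appears at the input as C(1 + A). The sketch below illustrates that relationship with hypothetical values; it is not taken from the paper.

```python
# Minimal sketch of the Miller effect: a feedback capacitor C across an
# inverting amplifier of gain -A presents an effective input capacitance
# C_in = C * (1 + A), so a small on-chip capacitor can act like a much
# larger one. Values are illustrative only.

def miller_input_capacitance(c_farads, gain_magnitude):
    """Effective input capacitance of a capacitor in a -A feedback path."""
    return c_farads * (1 + gain_magnitude)

c_phys = 1e-12  # hypothetical 1 pF physical capacitor
c_eff = miller_input_capacitance(c_phys, gain_magnitude=40)
print(c_eff)  # on the order of 41 pF, a 41x boost from a gain of -40
```

This multiplication is why the feedback configuration lets the filter meet narrow-band requirements without physically large capacitors.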
Their receiver has an active area of less than 0.05 square millimeters.
One challenge the researchers had to overcome was determining how to apply enough voltage to drive the switches while keeping the overall power supply of the chip at only 0.6 volts.
In the presence of interfering signals, such tiny switches can turn on and off in error, especially if the voltage required for switching is extremely low.
To address this, the researchers came up with a novel solution, using a special circuit technique called bootstrap clocking. This method boosts the control voltage just enough to ensure the switches operate reliably while using less power and fewer components than traditional clock boosting methods.
Taken together, these innovations enable the new receiver to consume less than a milliwatt of power while blocking about 30 times more harmonic interference than traditional IoT receivers.
“Our chip also is very quiet, in terms of not polluting the airwaves. This comes from the fact that our switches are very small, so the amount of signal that can leak out of the antenna is also very small,” Araei adds.
Because their receiver is smaller than traditional devices and relies on switches and precharged capacitors instead of more complex electronics, it could be more cost-effective to fabricate. In addition, since the receiver design can cover a wide range of signal frequencies, it could be implemented on a variety of current and future IoT devices.
Now that they have developed this prototype, the researchers want to enable the receiver to operate without a dedicated power supply, perhaps by harvesting Wi-Fi or Bluetooth signals from the environment to power the chip.
This research is supported, in part, by the National Science Foundation.
Work & Economy
How market reactions to recent U.S. tariffs hint at start of global shift for nation
Christy DeSmith
Harvard Staff Writer
June 17, 2025
8 min read
Economist updates literature on optimal American import-tax rate in world of interconnected trade, investment
President Trump’s tariffs, announced on April 2, upset the global economy in new ways.
“The financial meltdown they triggered was really striking,” said Oleg Itskhoki, a professor of economics. “What happened to the stock market, what happened to bond yields, what happened to the dollar exchange rate. They’re all connected. You can’t study tariffs anymore without considering what happens in the financial market.”
In a new working paper, Itskhoki and longtime collaborator Dmitry Mukhin of the London School of Economics explore what they call “the optimal macro tariff,” or the import tax rate most favorable to U.S. economic interests. The international macroeconomists are known for their work detailing how today’s globalized financial market drives currency valuations. Now they’ve expanded that approach to study tariffs for a variety of U.S. policy objectives.
The academic literature was due for an update. The last time the world saw tariffs on this scale was the 1930s, when countries including the U.S. sought to protect jobs amid the Great Depression’s high unemployment.
“There was no wave of protectionism after the Great Financial Crisis of 2008 and ’09, when unemployment in the U.S. exceeded 10 percent,” Itskhoki noted. “It seemed like the developed world had shifted to an equilibrium without tariffs.”
We sat down with Itskhoki recently to ask how tariffs function in a world of deeply interconnected trade and investment. The conversation was edited for length and clarity.
What, exactly, is an “optimal macro tariff”?
The economic literature on optimal tariffs typically asks, “What policy gives a country the most favorable terms of trade with the rest of the world?” That literature typically assumes trade balance, but the last time the U.S. had anything close to balanced trade was 1991 or ’92.
At the same time, macroeconomists tend to think about trade imbalances without considering tariffs too much. What we do in this paper is combine the two.
You’ve bridged two very different traditions within economics.
Yes. Because we’ve seen not only the globalization of trade, we’ve also seen the globalization of financial markets with countries holding large portfolios of foreign assets. As it turns out, this is consequential for the optimal tariff.
Your paper focuses on optimal macro tariffs for the U.S. What should we know about the country’s place in the global economy at this very moment?
Macroeconomic research over the last 20 years focused on what is sometimes called the “exorbitant privilege” of the United States.
The country may have had a persistent trade deficit and consequently accumulated fewer assets than liabilities. But its foreign assets tended to be of the riskier sort, like direct foreign investments and portfolio holdings. They generated high returns relative to liabilities — which are, to a large extent, U.S. Treasuries.
And the federal government enjoyed paying low returns on U.S. Treasuries until recently because they were viewed as the world’s safest asset. That’s what allowed the U.S. to run a trade deficit and the U.S. government to run a large fiscal deficit without dire financial consequences.
Higher interest rates mean that required yields on U.S. Treasuries are now quite high, so the government can no longer borrow cheaply. Interest rate payments on federal debt are now around half the country’s massive fiscal deficit, by itself larger than the trade deficit.
We typically see developing countries going into periods of big trade deficits and big fiscal deficits. It’s very unusual to find the world’s dominant country in this position.
Oleg Itskhoki.
Veasey Conway/Harvard Staff Photographer
What happens when you add tariffs to the mix?
With most previous tariff announcements, the dollar appreciated, in line with theoretical predictions. With tariffs, Americans buy fewer imports. Less foreign exchange is needed to pay for them, so there are more dollars left over, and the currency becomes stronger. That, in turn, hurts U.S. exporters, because American goods become more expensive overseas. Hence, foreigners buy less, resulting in a new equilibrium with less trade on both sides.
In addition, dollar appreciations are akin to a financial transfer from the U.S. to the parts of the world that hold U.S. assets — the so-called “valuation effects.”
Therefore, in a financially globalized world, the optimal tariff for the U.S. is smaller than in previous eras. Furthermore, holdings of U.S. assets offer an effective insurance for countries like China and Japan against a possible trade war with the U.S.
But that’s not what played out after April 2. Instead, we saw a depreciating U.S. dollar. Why?
This was surprising indeed. Dollar depreciation happened along with a large meltdown in the U.S. stock market and increasing yields on U.S. Treasuries. At first there was a theory that foreigners were dumping Treasuries. But in reality, there was not much else for them to buy. Maybe they wanted to sell U.S. Treasuries and buy, say, German Bunds of equal quality. In reality, there are 10 times fewer German Bunds than U.S. Treasuries out there today, making it difficult to shift portfolios away from U.S. assets.
Instead what we saw was a clear turn in the currency market. In the past, Asian investors in particular but also European investors to some extent were willing to buy U.S. Treasuries without holding currency insurance. The market expected the U.S. dollar to always appreciate in bad economic times. But April 2 was the first time the dollar massively depreciated in bad times, as global markets turned to pessimism on the announcement of the trade war.
The U.S. dollar now resembles the British pound following the 2016 Brexit vote. Before April 2, Japanese pension funds, for example, may have been willing to hold U.S. assets without buying currency insurance. Now they want to sell that risk of U.S. dollar depreciation to the market. And the required premium for selling that risk resulted in a weaker dollar.
Has the U.S. benefited at all from the trade war?
Well, the U.S. is collecting tariff revenues. But for these very immediate, and very small, monetary gains, the government has potentially triggered a much bigger process that will eliminate some of the benefits the country has enjoyed. I mean, you can call French President Emmanuel Macron or U.K. Prime Minister Keir Starmer to negotiate on tariffs. But you cannot call up the financial market and tell it to have faith in the dollar.
“It doesn’t mean the U.S. will immediately lose its central place in the global financial market, but it is clear that the tariffs marked the start of some sort of realignment.”
It doesn’t mean the U.S. will immediately lose its central place in the global financial market, but it is clear that the tariffs marked the start of some sort of realignment. The fact that we’re discussing a tax bill that is meant to increase the deficit, in this environment, is just insane.
What should laypeople know about the model you’ve constructed to study optimal macro tariffs?
The model provides a formalized environment where you can ask questions and get coherent answers. You can play around with different objectives, like raising revenue or boosting manufacturing employment. We found that indeed there is an optimal tariff for the U.S. — somewhere between 25 and 35 percent — if one ignores the financial market.
But even then, it only works if the government convinces the rest of the world not to retaliate, because there’s a much bigger loss if everybody starts doing tariffs. That’s pretty much how the world lived before the Second World War, before all that collective effort was done to bring down tariffs.
Suddenly, the very myopic optimal tariff has won the day once more. Once we factor in the financial market, the optimal tariff is actually much smaller, at something like 9 percent. And this only takes into account the direct financial losses from valuation effects, without capturing the consequences of the U.S. losing its dominance in the global financial market.
You mentioned playing around with different policy objectives. According to your model, what is the optimal tariff for boosting employment in U.S. manufacturing?
We really thought there would be an optimal tariff for manufacturing employment. It shows you how biased we are. Because make no mistake, tariffs are a trade tax. They reduce the size of the tradable sector, meaning they reduce both imports and exports. It’s true that increased trade with China hurt U.S. manufacturing. But today, a tariff on trade with China will hurt U.S. manufacturing even further.
If the goal is boosting tradable employment, what you actually want are subsidies. Maybe particular regions are targeted. Maybe we decide that certain industries are important — for security, for defense, for maintaining our technological leadership. Ideally, U.S. society would need to decide through the democratic process what activities to subsidize within a balanced budget, given the high costs of borrowing right now. But we are obviously very far from this ideal.
Photos by Veasey Conway/Harvard Staff Photographer
Campus & Community
‘Truly the best’
Clea Simon
Harvard Correspondent
June 17, 2025
7 min read
65 staffers honored as ‘Harvard Heroes’ for ‘exemplary’ service to mission
The mood was joyous as family and friends packed into Sanders Theatre on Thursday to celebrate 65 “Harvard Heroes” from across the University. Nominated and selected by their peers, these staff members were introduced by the heads of their departments, divisions, or Schools. Their achievements were highlighted in brief and often touchingly personal remarks by President Alan M. Garber.
Processing in the theater to the sounds of Janelle Monae’s cover of David Bowie’s “Heroes,” the 65 honorees were hailed by Executive Vice President Meredith Weenick as “truly the best.”
“In many ways, large and small, these individuals go above and beyond in service of Harvard’s mission and in support of its people,” she said.
Following an introductory video, which noted, among other facts, that only one-half of 1 percent of Harvard employees are named Harvard Heroes, Vice President for Human Resources Manuel Cuevas-Trisán praised the honorees’ “exemplary efforts.”
Garber, who entered to a prolonged standing ovation, thanked the assembled honorees for their service: “You are the ones who don’t give up, who keep showing up, because you believe in the importance of making real, positive changes — for students, for colleagues, and for the wider world.”
Alumni Affairs and Development
Honorees: Kelly Hahn, Director of Content Strategy; John Prince, Associate Director, Reunions and Classes
Cheers greeted Garber’s highlighting of Hahn’s accomplishments when he interrupted his scripted remarks praising her work creating a centralized resource “amidst complex current events” to note, “There’s an understatement!”
Harvard Graduate School of Education
Honorees: Andrea Le, Associate Director, Community Building and International Student Support; Allison Pingree, Associate Director, Instructional Support and Development; Faina Gould, Senior Research Development Manager
“You’ve been the best kind of action hero for vital research initiatives in a quickly evolving grants landscape,” said Garber, praising Gould.
Harvard Business School
Honorees: Madeline Meehan, Director, Campus Activation; Katia Muser, Senior Director, Software Delivery Excellence; Robin Smith, Senior Manager Support Services
Noting Smith’s extracurricular work as “a theater actor,” Garber concluded, “Your empathetic and empowering style takes center stage.”
Harvard Public Affairs and Communications
Honoree: Senior Writer Alvin Powell
“Thank you for chronicling Harvard’s history,” said Garber, “one article at a time.”
Campus Services
Honorees: Timothy Allen, Crew Chief A; Associate Director Matthew Civittolo; Generator Mechanic/Working Foreman Aaron Mayerson; Associate Director of Biosafety Angela Reid; Crimson Catering and Event Services Director Kyle Ronayne; Senior Accounting Manager Angelina Yun
Noting that “Commencement and Reunions require 121 tents, 5,859 tables, and 66,740 chairs,” in his address to Ronayne, Garber concluded: “Everything falls into place because of just one of you.”
Harvard Federal Credit Union
Honoree: Lorraine Gadsby, Lead Member Services Representative
Praising her “pragmatic approach to tough situations” and “genuine consideration for others” over nearly 40 years at Harvard, Garber also saluted Gadsby’s “thoughtful Saturday pastries.”
President Alan M. Garber offered personal remarks for each honoree.
“Harvard Heroes” are nominated from across the University by their peers.
An enthusiastic audience.
Harvard Medical School
Honorees: Safiya Bobb, Associate Director, Program Operations Executive; Susanne Churchill, Executive Director for Biomedical Informatics; Faculty Affairs Academic Appointments Manager Mindy Dellert; Christina Kennedy, Strategic Projects Manager for the Office of Research Administration; Veronica Leo, Program Manager for Academic Advancement; Jennifer Puccetti, Executive Director, Global Health and Social Medicine; Livia Rizzo, MEDscience Senior Associate Director
Indulging in a little word play as he highlighted Bobb’s service, Garber said, “Because your leadership balances business goals with staff needs, the Asynchronous Operations team is always in sync.”
Harvard School of Dental Medicine
Honorees: Academic Societies Coordinator Adrien Doherty; Carrie Sylven, Director of Student Affairs
Harvard T.H. Chan School of Public Health
Honorees: Environmental Health Project Coordinator Jeffrey Adams; Henrique Coelho, Assistant Director for PPCR Program Administration; Assistant Director, MPH Generalist and Cross-MPH Programming Megan Kerin; Immunology and Infectious Diseases Director of Administration Marie Richard
Highlighting Adams, “a public health superman,” Garber noted his ability to “neutralize longstanding Institutional Review Board challenges with a single email.”
Harvard University Health Services
Honorees: Jason Ward, Director of Health Plan Operations and Member Services; Marie Haley, Primary Care Physician/PCP Team Leader
Calling Ward “the Tom Brady of Member Services,” Garber concluded, “Peers, patients, and providers are glad you’re always on the ball.”
Harvard Graduate School of Design
Honoree: Keith Gnoza, Director of Financial Assistance/Assistant Director of Student Services
Gnoza’s “genuine empathy and engagement through the Student Emergency Fund reassure and support students experiencing unexpected hardship,” said Garber.
Harvard Divinity School
Honoree: Senior Graphic Designer Kristie Welsh
Garber praised Welsh’s “brilliant designs,” noting, “through your commitment and creativity, the School shares its scholarship and good work with audiences far and wide.”
Faculty of Arts and Sciences
Honorees: Associate Dean of Students Lauren Brandt; Ethan Contini-Field, Manager of Asynchronous Course Development; Front Office Manager Kai Crull; Physics Executive Director Despina Bokios; IT Client Support Services Associate Roy Guyton; Business Systems Analyst Raiyan Huq; Magdelena Kenar, Associate Director of Faculty Support Services; Men’s and Women’s Cross Country/Track and Field Associate Head Coach Marc Mangiacotti; Alta Mauro, Associate Dean of Students for Inclusion and Belonging; Paul Rattigan, Senior Concert Piano Technician; Rachel Rockenmacher, Executive Director of the Center for Jewish Studies; Sheila Thimba, Dean for Administration and Finance; Lu Wang, Assistant Director of Undergraduate Studies; Lawrence White, Operations Director for Neuroimaging at the Center for Brain Science
In addition to serious praise for all the honorees, Garber noted Crull’s “outsized hospitality for Remy the cat,” a campus feline fixture.
Harvard Radcliffe Institute
Honoree: Amanda Lubniewski, Head of Student Engagement
“With authenticity, empathy, and attentiveness, you give students indelible experiences,” said Garber.
John A. Paulson School of Engineering and Applied Sciences
Honoree: Leslie Schaffer, Associate Dean for Finance
“You turn ambitious ideas into reality,” said Garber.
Harvard Law School
Honorees: Emily Newburger, Executive Editor of the Harvard Law Bulletin; Jacqueline Calahong, Staff Assistant, Emmett Environmental Law and Policy Clinic
Praising Calahong, Garber said, “In all you do, you’re building a better climate for the clinic and, ultimately, for the Earth.”
Harvard Library
Honorees: Research Data Services Librarian Julie Goldman; Sarah Hoke, Librarian for Collection Development Management; Juliana Kuipers, Associate University Archivist for Collection Development and Records Management Services
In his commendation of Hoke, Garber said, “With practical positivity and a heart for service, you’re expanding access to engaging library collections.”
Harvard University Information Technology
Honorees: David Heitmeyer, Director, Academic Platform Development; Senior Technical Support Engineer Cheryl Johnson; David Sobel, Associate Director for FAS Technology Strategy and Planning
Highlighting Heitmeyer’s 25 years of service, which includes introducing international students to “barbecue and Dr. Pepper,” Garber said: “Building a sense of belonging is in your core architecture.”
Harvard Kennedy School
Honorees: Financial Associate Dawn Hannon; Community Engagement Librarian Alessandra Seiter
“When challenges add up, colleagues are grateful they can count on you,” Garber said of Hannon.
Financial Administration
Honorees: Manager of Administration and Operations Alissa Beideck Landry; Portfolio Team Manager Rebecca Looman
Citing Landry’s oversight of the department’s recent move, Garber said, “Your adaptability, selflessness, and poise under pressure inspire colleagues to follow your lead.”
Memorial Church
Honoree: Director of Finance and Operations Charles Anderson
Garber called Anderson “a blessing to the Mem Church mission of educating minds, expanding hearts, and enriching lives.”
“In such a dynamic, demanding place, many people — including this president — are grateful that your steady presence keeps things sailing smoothly,” said Garber.
Office of the Vice Provost for Advances in Learning
Honoree: Zachary Wang, Director of Strategic Technology
Naming Wang’s work launching the Learning Experience Platform, Garber said: “Your tireless efforts are revolutionizing teaching and learning University-wide.”
Human Resources
Honoree: Kristina Paolini, Talent Acquisition and Outreach for Human Resources
“Staffing a world-class institution is demanding work, but your KSA’s” — knowledge, skills, and abilities — “are leading the way,” said Garber, before exhorting all the honorees to rise for yet another round of enthusiastic cheers and applause.
Campus & Community
Projects help students ‘build bridges’ across differences
Julie McDonough
Harvard Correspondent
June 17, 2025
long read
Online games and small group discussions provide opportunities for people with contrasting points of view to engage
Funded through the President’s Building Bridges Fund — an initiative started last fall to respond to the preliminary recommendations of the Presidential Task Forces — four student groups launched projects this spring to foster constructive dialogue and build relationships across differences. The projects are part of a larger, University-wide effort to promote dialogue across difference.
The student organizers — from Harvard College, Harvard Law School, and the Kenneth C. Griffin Graduate School of Arts and Sciences — took varying approaches to engaging fellow students in meaningful conversations on challenging topics and creating frameworks for productive discussions.
“When we established the fund, we hoped that we’d attract pilot projects that would spark deep and meaningful conversations across campus,” said President Alan M. Garber. “We want to ensure that all voices are heard at Harvard, and this first round of efforts, launched by accomplished student leaders, gave members of our community opportunities not only to engage in constructive dialogue but also to develop skills that will serve them well in many other encounters.”
“When we established the fund, we hoped that we’d attract pilot projects that would spark deep and meaningful conversations across campus.”
Alan M. Garber
Awarded funding in February to implement projects by the end of the academic year, the students quickly organized speakers, set up logistics, and worked with faculty and staff for guidance and assistance.
Here is a closer look at the projects completed so far.
The Policy Bridges Project
The idea for the project came from a discussion within the Harvard Griffin GSAS Science Policy Group. The group was searching for a way to enable substantive policy discussions and ideas around divisive issues that demand action. When the Building Bridges funding was announced, they saw an opportunity to engage fellow students on these topics with the hope of finding pathways forward.
The Ph.D. student organizers Arya Kaul (bioinformatics and integrative genomics), Lissah Johnson (biological sciences in public health), and Mia Sievers (biological and biomedical sciences) organized two events: a Climate Policy Fireside Chat and a Technology Policy Panel. The format for both events included inviting outside speakers to share their opinions and expertise, while allowing time for discussion and more informal conversation at post-event gatherings.
The Climate Policy Fireside Chat featured Undersecretary Katherine Antos from the Massachusetts Executive Office of Environmental Affairs. The discussion focused on how to move climate policy forward by identifying shared values among groups with different viewpoints.
“We learned that it is possible to make progress on climate issues when you search for common values,” said Kaul. “Forming partnerships with groups that have different viewpoints is possible when you understand what is important to them.”
Betsy Miller.
Photo by Ricardo Lopez
Betsy Miller, Will Rinehart, and Bruce Schneier.
Photo by Ricardo Lopez
The Technology Policy Panel featured Bruce Schneier, lecturer in public policy at the Harvard Kennedy School and a fellow at the Berkman Klein Center for Internet & Society, and Will Rinehart, senior fellow at the American Enterprise Institute and expert at the Federalist Society’s Emerging Technology Working Group, as panelists. The highlight of the panel was the exploration of “polarity thinking” with moderator Betsy Miller.
“It is very helpful to have a shared language and framework for provocative discussions like these,” said Miller, lecturer on law at Harvard Law School. “Polarities are opposites where the benefits of both are needed over time to succeed. Polarities help us move from ‘either/or’ binary thinking to a ‘both/and’ mindset. It’s a powerful tool for engaging across difference and having difficult conversations with curiosity and respect.”
With two events complete, the hope is to continue to host discussions on a variety of policy topics.
“We would love to be able to explore different areas, like healthcare policy, to find those shared values that help us to make progress,” said Johnson. “And with the polarity-thinking framework, we can have those hard conversations in ways that are productive.”
Tango Project
With the announcement of the President’s Building Bridges Fund grant program, Lucas Woodley, a Ph.D. student in psychology at the Griffin GSAS, who works closely with Psychology Professor Joshua Greene in his lab, saw an opportunity for a large-scale deployment of Tango at Harvard. The online game developed in the lab over the past five years promotes openness, respect, and connection across lines of division via a cooperative online quiz game. Here’s how it works:
Online, 20-minute games are scheduled for a set date and time.
On the set date and time, players log in and are paired with an online partner anonymously.
Together, the student pairs answer quiz questions ranging from pop culture to Harvard history to politically charged topics.
Funding through the President’s Building Bridges Fund allowed Woodley, in collaboration with Greene, to coordinate an event across the Harvard community with incentives for participation. The winning pair received Celtics playoffs tickets, and the winning House got $1,000 for their activities fund.
Participating students self-identified as varying grades of liberal or conservative. They were paired randomly with their partners, often playing with someone of differing views. The game is designed to help people with contrasting worldviews learn to trust and respect each other. “What we find is that students have so much fun playing the game,” said Woodley, noting that half of the players gave Tango a perfect 10 out of 10 for enjoyment. “At the same time, they learn how to engage with others on really challenging topics.”
Lucas Woodley (left) and Joshua D. Greene.
Photo by Dylan Goodman
Greene and Woodley recently published the results of the Tango experiment in Nature Human Behavior.
“Tango has so many beneficial outcomes for students in terms of making connections across differences,” said Greene. “Research from the Tango event at Harvard showed that after playing, students were significantly more interested in getting to know students with different viewpoints and significantly more comfortable voicing controversial views on campus.”
Greene and Woodley hope that Tango can be used regularly at moments when large groups of students come together at Harvard, such as at first-year orientation.
Hate & Remediation: Where Does Harvard Go from Here?
The project organized by students at Harvard Law School aimed to spark dialogue among students with different cultural and religious backgrounds. Recognizing the need for a space where students could come together to talk about issues on campus, the Jewish and Muslim student organizers sponsored a discussion examining the nature of hate and how to meaningfully address it on campus.
Randall Kennedy (left) and Noah Feldman.
Photo by Lorin Granger
The panel discussion included Noah Feldman, Felix Frankfurter Professor of Law, chair of the Society of Fellows, and founding director of the Julis-Rabinowitz Program on Jewish and Israeli Law at Harvard Law School, and Randall Kennedy, Michael R. Klein Professor at Harvard Law School. Given the then-expected release of the final reports from the Presidential Task Forces on Antisemitism and Anti-Israeli Bias and Anti-Muslim, Anti-Arab, and Anti-Palestinian Bias, they framed the talk as an opportunity for students to think beyond the standard legal remedies and to consider how they as a community could move forward.
More than 100 students participated in the lecture and follow-up discussion. “We worked really hard to create an event where people felt comfortable and welcome,” said Omar Tariq, a second-year law student. “We had students from a wide range of political, religious, and cultural backgrounds. Students expressed that they’ve been looking for more meaningful discussion of the issues we’re facing on campus and were appreciative of our event.”
The event focused on three goals: building relationships across divergent affinity groups; acting against discrimination, bullying, harassment, or hate; and fostering constructive dialogue. The students felt they achieved their goals given the turnout, discussion, and feedback afterward. The strong relationships built while planning the event provide a solid foundation for future events and collaborations, they said.
“This event was a good opportunity for people coming in with different beliefs and ideas to come together with a desire to make campus better. We got to explore how even though we may disagree deeply, it doesn’t mean we can’t also try to make things better for everyone on campus,” said Shanee Markovitz Kay, a second-year law student. “If you have a shared positive goal, there is a lot of good that can come out of it.”
Questions left unanswered
When the President’s Building Bridges Fund was announced, Harvard College students Irati Evworo Diez ’25, Noa Horowitz ’25, and Ari Kohn ’26 saw it not only as an opportunity to bring together a diverse mix of their classmates, but also as a chance to take advantage of the intellect and expertise of their professors. Following an application process that garnered significant interest, students were selected to take part in a series of dinner conversations. The organizers of the project deliberately chose students from diverse backgrounds, beliefs, and viewpoints with the intention of sparking serious, meaningful, and challenging conversations about issues facing Harvard and the world.
“Sometimes it is difficult to have these types of conversations inside the classroom for a variety of reasons,” said Evworo Diez. “We wanted to give students the chance to have these tough conversations in an intimate setting with a select group of students who would really engage on these challenging topics.”
Post dinner photo with students and faculty guests James Wood and Claire Messud.
Photo by Ari Kohn
The general structure of the dinners was to have Harvard professors and other subject-matter experts attend, share their expertise or points of view on a topic, and then have students engage with each other in conversation and debate. It was meant to be the best of a Harvard seminar with its rigor and depth, but in a more intimate and comfortable setting. The topics ranged from economics to campus life to gender and beyond. The feedback was incredibly positive.
“As student organizers, we were grateful President Garber initiated the fund so that we could provide this experience to students and begin to shift the culture on campus around engaging in constructive dialogue across difference,” said Evworo Diez. “Some students said it was one of the most meaningful experiences they had at Harvard.”
The hope is that the dinner series could continue or could be scaled to a GenEd class so that more students have the opportunity to participate in this type of experience.
“This dinner series gave me a lot of faith in Harvard’s capacity to do this work,” said Evworo Diez. “The current media portrayal of Harvard is so different from what is actually happening here. There is diversity at Harvard in the perspectives that students bring to the table — from their background, faith, culture, political viewpoints, and life experiences. Students and faculty were thankful to be forced to have these hard conversations and they were ready to engage.”
“Full participation and belonging don’t happen by accident; they are cultivated through intentional acts of courage, curiosity, and compassion,” said Sherri Charleston, chief community and campus life officer. “The Building Bridges grantees have shown us that true excellence emerges not in the absence of difference, but in the embrace of it. Their work reminds us that when we create spaces for respectful dialogue, we’re not just exchanging ideas. Together, we’re building the foundation for a Harvard where everyone can thrive.”
Gaspare LoDuca has been appointed MIT’s vice president for information systems and technology (IS&T) and chief information officer, effective Aug. 18. Currently vice president for information technology and CIO at Columbia University, LoDuca has held IT leadership roles in or related to higher education for more than two decades. He succeeds Mark Silis, who led IS&T from 2019 until 2024, when he left MIT to return to the entrepreneurial ecosystem in the San Francisco Bay area.
Executive Vice President and Treasurer Glen Shor announced the appointment today in an email to MIT faculty and staff.
“I believe that Gaspare will be an incredible asset to MIT, bringing wide-ranging experience supporting faculty, researchers, staff, and students and a highly collaborative style,” says Shor. “He is eager to start his work with our talented IS&T team to chart and implement their contributions to the future of information technology at MIT.”
LoDuca will lead the IS&T organization and oversee MIT’s information technology infrastructure and services that support its research and academic enterprise across student and administrative systems, network operations, cloud services, cybersecurity, and customer support. As co-chair of the Information Technology Governance Committee, he will guide the development of IT policy and strategy at the Institute. He will also play a key role in MIT’s effort to modernize its business processes and administrative systems, working in close collaboration with the Business and Digital Transformation Office.
“Gaspare brings to his new role extensive experience leading a complex IT organization,” says Provost Cynthia Barnhart, who served as one of Shor’s advisors during the search process. “His depth of experience, coupled with his vision for the future state of information technology and digital transformation at MIT, are compelling, and I am excited to see the positive impact he will have here.”
“As I start my new role, I plan to learn more about MIT’s culture and community to ensure that any decisions or changes we make are shaped by the community’s needs and carried out in a way that fits the culture. I’m also looking forward to learning more about the research and work being done by students and faculty to advance MIT’s mission. It’s inspiring, and I’m eager to support their success,” says LoDuca.
In his role at Columbia, LoDuca has overseen the IT department, headed IT governance committees for school and department-level IT functions, and ensured the secure operation of the university’s enterprise-class systems since 2015. During his tenure, he has crafted a culture of customer service and innovation — building a new student information system, identifying emerging technologies for use in classrooms and labs, and creating a data-sharing platform for university researchers and a grants dashboard for principal investigators. He also revamped Columbia’s technology infrastructure and implemented tools to ensure the security and reliability of its technology resources.
Before joining Columbia, LoDuca was the technology managing director for the education practice at Accenture from 1998 to 2015. In that role, he helped universities to develop and implement technology strategies and adopt modern applications and systems. His projects included overseeing the implementation of finance, human resources, and student administration systems for clients such as Columbia University, University of Miami, Carnegie Mellon University, the University System of Georgia, and Yale University.
“At a research institution, there’s a wide range of activities happening every day, and our job in IT is to support them all while also managing cybersecurity risks. We need to be creative and thoughtful in our solutions, and consider the needs and expectations of our community,” he says.
LoDuca holds a bachelor’s degree in chemical engineering from Michigan State University. He and his wife are recent empty nesters, and are in the process of relocating to Boston.
In 2023, about 4.4 percent (176 terawatt-hours) of total energy consumption in the United States was by data centers that are essential for processing large quantities of information. Of that 176 TWh, approximately 100 TWh (57 percent) was used by CPU and GPU equipment. Energy requirements have escalated substantially in the past decade and will only continue to grow, making the development of energy-efficient computing crucial.
Superconducting electronics have arisen as a promising alternative for classical and quantum computing, although their full exploitation for high-end computing requires a dramatic reduction in the amount of wiring linking ambient temperature electronics and low-temperature superconducting circuits. To make systems that are both larger and more streamlined, replacing commonplace components such as semiconductors with superconducting versions could be of immense value. It’s a challenge that has captivated MIT Plasma Science and Fusion Center senior research scientist Jagadeesh Moodera and his colleagues, who described a significant breakthrough in a recent Nature Electronics paper, “Efficient superconducting diodes and rectifiers for quantum circuitry.”
Moodera was working on a stubborn problem. One of the critical long-standing requirements is the need for the efficient conversion of AC currents into DC currents on a chip while operating at the extremely cold cryogenic temperatures required for superconductors to work efficiently. For example, in superconducting “energy-efficient rapid single flux quantum” (ERSFQ) circuits, the AC-to-DC issue is limiting ERSFQ scalability and preventing their use in larger circuits with higher complexities. To respond to this need, Moodera and his team created superconducting diode (SD)-based superconducting rectifiers — devices that can convert AC to DC on the same chip. These rectifiers would allow for the efficient delivery of the DC current necessary to operate superconducting classical and quantum processors.
Quantum computer circuits can only operate at temperatures close to 0 kelvin (absolute zero), and the way power is supplied must be carefully controlled to limit the effects of interference introduced by too much heat or electromagnetic noise. Most unwanted noise and heat come from the wires connecting cold quantum chips to room-temperature electronics. Using superconducting rectifiers to convert AC currents into DC within the cryogenic environment instead reduces the number of wires, cutting down on heat and noise and enabling larger, more stable quantum systems.
In a 2023 experiment, Moodera and his co-authors developed SDs that are made of very thin layers of superconducting material that display nonreciprocal (or unidirectional) flow of current and could be the superconducting counterpart to standard semiconductors. Even though SDs have garnered significant attention, especially since 2020, up until this point the research has focused only on individual SDs for proof of concept. The group’s 2023 paper outlined how they created and refined a method by which SDs could be scaled for broader application.
Now, by building a diode bridge circuit, they demonstrated the successful integration of four SDs and realized AC-to-DC rectification at cryogenic temperatures.
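The principle behind the four-diode circuit is a classic full-wave bridge rectifier, and it can be sketched numerically. The following is an illustrative idealization, assuming each superconducting diode acts as a perfect one-way valve; it is not a model of the team’s actual device physics:

```python
import math

def bridge_rectify(v):
    """Idealized four-diode bridge: in either half-cycle of the AC
    drive, two of the four one-way elements conduct, steering current
    through the load in the same direction. Net effect: |v|."""
    return abs(v)

# One full cycle of a sinusoidal AC drive, sampled at N points.
N = 1000
v_in = [math.sin(2 * math.pi * k / N) for k in range(N)]
v_out = [bridge_rectify(v) for v in v_in]

mean_in = sum(v_in) / N    # ~0: a pure AC waveform has no DC component
mean_out = sum(v_out) / N  # ~2/pi (about 0.64): a nonzero DC level
```

The nonzero mean of the rectified waveform is the DC component that circuits such as ERSFQ logic need; in a real system, filtering would smooth the remaining ripple into a steady DC supply.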
The new approach described in their recent Nature Electronics paper will significantly cut down on the thermal and electromagnetic noise traveling from ambient into cryogenic circuitry, enabling cleaner operation. The SDs could also potentially serve as isolators/circulators, assisting in insulating qubit signals from external influence. The successful assimilation of multiple SDs into the first integrated SD circuit represents a key step toward making superconducting computing a commercial reality.
“Our work opens the door to the arrival of highly energy-efficient, practical superconductivity-based supercomputers in the next few years,” says Moodera. “Moreover, we expect our research to enhance the qubit stability while boosting the quantum computing program, bringing its realization closer.” Given the multiple beneficial roles these components could play, Moodera and his team are already working toward the integration of such devices into actual superconducting logic circuits, including in dark matter detection circuits that are essential to the operation of experiments at CERN and LUX-ZEPLIN at the Berkeley National Lab.
This work was partially funded by MIT Lincoln Laboratory’s Advanced Concepts Committee, the U.S. National Science Foundation, U.S. Army Research Office, and U.S. Air Force Office of Scientific Research.
This work was carried out, in part, through the use of MIT.nano’s facilities.
The successful Cambridge grantees’ work covers a range of research areas, including the development of next-generation semiconductors, new methods to identify dyslexia in young children, how diseases spread between humans and animals, and the early changes that happen in cells before breast cancer develops, with the goal of finding ways to stop the disease before it starts.
The funding, worth €721 million in total, will go to 281 leading researchers across Europe. The Advanced Grant competition is one of the most prestigious and competitive funding schemes in the EU and associated countries, including the UK. It gives senior researchers the opportunity to pursue ambitious, curiosity-driven projects that could lead to major scientific breakthroughs. Advanced Grants may be awarded up to €2.5 million for a period of five years. The grants are part of the EU’s Horizon Europe programme. The UK agreed a deal to associate to Horizon Europe in September 2023.
This competition attracted 2,534 proposals, which were reviewed by panels of internationally renowned researchers. Over 11% of proposals were selected for funding. Estimates show that the grants will create approximately 2,700 jobs in the teams of new grantees. The new grantees will be based at universities and research centres in 23 EU Member States and associated countries, notably in the UK (56 grants), Germany (35), Italy (25), the Netherlands (24), and France (23).
“Many congratulations to our Cambridge colleagues on these prestigious ERC funding awards,” said Professor Sir John Aston, Cambridge’s Pro-Vice-Chancellor for Research. “This type of long-term funding is invaluable, allowing senior researchers the time and space to develop potential solutions for some of the biggest challenges we face. We are so fortunate at Cambridge to have so many world-leading researchers across a range of disciplines, and I look forward to seeing the outcomes of their work.”
The Cambridge recipients of 2025 Advanced Grants are:
Professor Clare Bryant (Department of Veterinary Medicine) for investigating human and avian pattern recognition receptor activation of cell death pathways, and the impact on the host inflammatory response to zoonotic infections.
Professor Sir Richard Friend (Cavendish Laboratory/St John’s College) for bright high-spin molecular semiconductors.
Professor Usha Goswami (Department of Psychology/St John’s College) for a cross-language approach to the early identification of dyslexia and developmental language disorder using speech production measures with children.
Professor Regina Grafe (Faculty of History) for colonial credit and financial diversity in the Global South: Spanish America 1600-1820.
Professor Judy Hirst (MRC Mitochondrial Biology Unit/Corpus Christi College) for the energy-converting mechanism of a modular biomachine: Uniting structure and function to establish the engineering principles of respiratory complex I.
Professor Matthew Juniper (Department of Engineering/Trinity College) for adjoint-accelerated inference and optimisation methods.
Professor Walid Khaled (Department of Pharmacology/Magdalene College) for understanding precancerous changes in breast cancer for the development of therapeutic interceptions.
Professor Adrian Liston (Department of Pathology/St Catharine’s College) for dissecting the code for regulatory T cell entry into the tissues and differentiation into tissue-resident cells.
Professor Róisín Owens (Department of Chemical Engineering and Biotechnology/Newnham College) for conformal organic devices for electronic brain-gut readout and characterisation.
Professor Emma Rawlins (Department of Physiology, Development and Neuroscience/Gurdon Institute) for reprogramming lung epithelial cell lineages for regeneration.
Dr Marta Zlatic (Department of Zoology/Trinity College) for discovering the circuit and molecular basis of inter-strain and inter-species differences in learning
“These ERC grants are our commitment to making Europe the world’s hub for excellent research,” said Ekaterina Zaharieva, European Commissioner for Startups, Research, and Innovation. “By supporting projects that have the potential to redefine whole fields, we are not just investing in science but in the future prosperity and resilience of our continent. In the next competition rounds, scientists moving to Europe will receive even greater support in setting up their labs and research teams here. This is part of our ‘Choose Europe for Science’ initiative, designed to attract and retain the world’s top scientists.”
“Much of this pioneering research will contribute to solving some of the most pressing challenges we face - social, economic and environmental,” said Professor Maria Leptin, President of the European Research Council. “Yet again, many scientists - around 260 - with ground-breaking ideas were rated as excellent, but remained unfunded due to a lack of funds at the ERC. We hope that more funding will be available in the future to support even more creative researchers in pursuing their scientific curiosity.”
Eleven senior researchers at the University of Cambridge have been awarded Advanced Grants from the European Research Council – the highest number of grants awarded to any institution in this latest funding round.
Two biologists, an engineer, a physicist and a health scientist from ETH Zurich have been awarded ERC Advanced Grants worth around 12 million euros. The researchers are among the first in Switzerland to receive this prestigious EU research funding after a hiatus of several years.
In Invisible Rivals, published by Yale University Press on 17 June, Dr Goodman argues that throughout human history we have tried to rid our social groups of free-riders, people who take from others without giving anything back. But instead of eliminating free-riders, human evolution has just made them better at hiding their deception.
Goodman explains that humans have evolved to use language to disguise selfish acts and exploit our cooperative systems. He links this ‘invisible rivalry’ to the collapse of trust and consequent success of political strongmen today.
Goodman says: “We see this happening today, as evidenced by the rise of the Julius Caesar of our time—Donald Trump— but it is a situation that evolution has predicted since the origins of life and later, language, and which will only change form again even if the current crises are overcome.”
Goodman argues that over the course of human evolution “When we rid ourselves of ancient, dominant alphas, we traded overt selfishness for something perhaps even darker: the ability to move through society while planning and coordinating.”
“As much as we evolved to use language effectively to work together, to overthrow those brutish and nasty dominants that pervaded ancient society, we also (and do) use language to create opportunities that benefit us … We use language to keep our plans invisible. Humans, more than other known organisms, can cooperate until we imagine a way to compete, exploit, or coerce, and almost always rely on language to do so.”
Goodman, an expert on human social evolution at the University of Cambridge, identifies free-riding behaviour in everything from benefits cheating and tax evasion, to countries dodging action on climate change, and the actions of business leaders and politicians.
Goodman warns that “We can’t stop people free-riding, it’s part of our nature, the incurable syndrome… Free riders are among us at every level of society and pretending otherwise can make our own goals unrealistic, and worse, appear hopeless. But if we accept that we all have this ancient flaw, this ability to deceive ourselves and others, we can design policies around that and change our societies for the better.”
Lessons from our ancestors
Goodman points out that humans evolved in small groups meaning that over many generations we managed to design social norms to govern the distribution of food, water and other vital resources.
“People vied for power but these social norms helped to maintain a trend toward equality, balancing out our more selfish dispositions. Nevertheless, the free-rider problem persisted and using language we got better at hiding our cheating.”
One academic camp has argued that ancient humans used language to work together to overthrow and eject “brutish dominants”. The opposing view claims that this never happened and that humans are inherently selfish and tribal. Goodman rejects both extremes.
“If we accept the view that humans are fundamentally cooperative, we risk trusting blindly. If we believe everyone is selfish, we won’t trust anyone. We need to be realistic about human nature. We’re a bit of both so we need to learn how to place our trust discerningly.”
Goodman points out that our distant ancestors benefitted from risk-pooling systems, whereby all group members contributed labour and shared resources, but this only worked because it is difficult to hide tangible assets such as tools and food. While some hunter-gatherer societies continue to rely on these systems, they are ineffective in most modern societies in our globalised economy.
“Today most of us rely largely on intangible assets for monetary exchange so people can easily hide resources, misrepresent their means and invalidate the effectiveness of social norms around risk pooling,” Goodman says.
“We are flawed animals capable of deception, cruelty, and selfishness. The truth is hard to live with but confronting it through honest reflection about our evolutionary past gives us the tools to teach ourselves and others about how we can improve the future.”
Taking action: self-knowledge, education and policy
Goodman, who teaches students at Cambridge about the evolution of cooperation, argues that we reward liars from a young age and that this reinforces bad behaviour into adulthood.
“People tell children that cheaters don’t prosper, but in fact cheats who don’t get caught can do very well for themselves.”
“Evolutionarily speaking, appearing trustworthy but being selfish can be more beneficial to the individual. We need to recognise that and make a moral choice about whether we try to use people or to work with them.”
At the same time, Goodman thinks we need to arm ourselves intellectually with the power to tell who is credible and who is not. “Our most important tool for doing this is education,” he says. “We must teach people to think ethically for themselves, and to give them the tools to do so.”
But Goodman cautions that even the tools we use to expose exploiters are open to exploitation: “Think about how people across the political sphere accuse others of virtue signalling or abusing a well-intentioned political movement for their own gain.”
Goodman believes that exposing free-riders is more beneficial than punishment. “Loss of social capital through reputation is an important motivator for anyone,” he argues, suggesting that journalistic work exposing exploitation can be as effective at driving behaviour change as criminal punishment.
“The dilemma each of us faces now is whether to confront invisible rivalry or to let exploiters undermine society until democracy in the free world unravels—and the freedom of dissent is gone.”
Dr Jonathan R Goodman is a research associate at Cambridge Public Health and a social scientist at the Wellcome Sanger Institute.
To save democracy and solve the world's biggest challenges, we need to get better at spotting and exposing people who exploit human cooperation for personal gain, argues Cambridge social scientist Dr Jonathan Goodman.
If we accept that we all have this ancient flaw, we can change our societies for the better
ETH graduates are primed to excel – and they bring far more to the table than just technical expertise. This makes them popular among employers. But can ETH maintain its winning formula for education in the future?
Knowledge grows through dialogue: we learn by explaining and understand by listening. ETH helps facilitate this process – in continuing education, in vocational training and at the science-policy interface.
Students at ETH Zurich are teaming up with engineers from industrial companies to help expedite innovation. This unique approach to teaching and collaboration has been hailed by both sides as a great success.
Alumna Petra Ehmann studied Mechanical Engineering at ETH Zurich. Today, she works for the Ringier media group as Chief Innovation and AI Officer. Her technical understanding helps her to act as a bridge builder between engineers, senior management and the market.
Alumna Michelle Ammann studied Comparative and International Studies at ETH Zurich and now works at the UN in New York. The innovative ETH mindset and her data analysis skills are highly valued by the international organisation.
ETH alumni Moritz Lechner and Felix Mayer have founded Sensirion, an internationally successful company. Now the two physicists look back. What has remained of their education at one of the best universities in the world?
Three women spin-off founders from ETH Zurich have developed a portable measuring device that analyses soil quality and enables customised fertilisation. The device makes farming more sustainable and efficient while reducing its environmental impact.
In 2019, ETH Zurich launched one of the world’s first Master’s degrees in Quantum Engineering. Since then, interest in the programme has soared – and its first graduates are already making their mark in industry.
The Graphische Sammlung ETH Zürich boasts a collection of some 160,000 prints and drawings by artists such as Dürer, Rembrandt and Warhol. Its new exhibition, NEOGEO, marks an experimental turn, featuring the work of three contemporary female artists.
In 1989, New York City opened a new jail. But not on dry land. The city leased a barge, then called the “Bibby Resolution,” which had been topped with five stories of containers made into housing, and anchored it in the East River. For five years, the vessel lodged inmates.
A floating detention center is a curiosity. But then, the entire history of this barge is curious. Built in 1979 in Sweden, it housed British troops during the Falkland Islands war with Argentina, became worker housing for Volkswagen employees in West Germany, got sent to New York, also became a detention center off the coast of England, then finally was deployed as oil worker housing off the coast of Nigeria. The barge has had nine names, several owners, and flown the flags of five countries.
In this one vessel, then, we can see many currents: globalization, the transience of economic activity, and the hazy world of transactions many analysts and observers call “the offshore,” the lightly regulated sphere of economic activity that encourages short-term actions.
“The offshore presents a quick and potentially cheap solution to a crisis,” says MIT lecturer Ian Kumekawa. “It is not a durable solution. The story of the barge is the story of it being used as a quick fix in all sorts of crises. Then these expediences become the norm, and people get used to them and have an expectation that this is the way the world works.”
Now Kumekawa, a historian who started teaching as a lecturer at MIT earlier this year, explores the ship’s entire history in “Empty Vessel: The Global Economy in One Barge,” just published by Knopf and John Murray. In it, he traces the barge’s trajectory and the many economic and geopolitical changes that helped create the ship’s distinctive deployments around the world.
“The book is about a barge, but it’s also about the developing, emerging offshore world, where you see these layers of globalization, financialization, privatization, and the dissolution of territoriality and orders,” Kumekawa says. “The barge is a vehicle through which I can tell the story of those layers together.”
“Never meant to be permanent”
Kumekawa first found out about the vessel several years ago: New York City obtained another floating detention center in the 1990s, which prompted him to start looking into the past of the older jail ship, the former “Bibby Resolution.” The more he found out about its distinctive past, the more curious he became.
“You start pulling on a thread, and you realize you can keep pulling,” Kumekawa says.
The barge Kumekawa follows in the book was built in Sweden in 1979 as the “Balder Scapa.” Even then, commerce was plenty globalized: The vessel was commissioned by a Norwegian shell company, with negotiations run by an expatriate Swedish shipping agent whose firm was registered in Panama and used a Miami bank.
The barge was built at an inflection point following the economic slowdown and oil shocks of the 1970s. Manufacturing was on the verge of declining in both Western Europe and the U.S.; about half as many people now work in manufacturing in those regions, compared to 1960. Companies were looking to find cheaper global locations for production, reinforcing the sense that economic activity was now less durable in any given place.
The barge became part of this transience. The five-story accommodation block was added in the early 1980s; in 1983 it was re-registered in the UK and sent to the Falkland Islands as a troop accommodation named the “COASTEL 3.” Then it was re-registered in the Bahamas and sent to Emden, West Germany, as housing for Volkswagen workers. The vessel then served its stints as inmate housing — first in New York, then off the coast of England from 1997 to 2005. By 2010, it had been re-registered yet again, in St. Vincent and the Grenadines, and was housing oil workers off the coast of Nigeria.
“Globalization is more about flow than about stocks, and the barge is a great example of that,” Kumekawa says. “It’s always on the move, and never meant to be a permanent container. It’s understood people are going to be passing through.”
As Kumekawa explores in the book, this sense of social dislocation overlapped with the shrinking of state capacity, as many states increasingly encouraged companies to pursue globalized production and lightly regulated financial activities in numerous jurisdictions, in the hope it would enhance growth. And it has, albeit with unresolved questions about who the benefits accrue to, the social dislocation of workers, and more.
“In a certain sense it’s not an erosion of state power at all,” Kumekawa says. “These states are making very active choices to use offshore tools, to circumvent certain roadblocks.” He adds: “What happens in the 1970s and certainly in the 1980s is that the offshore comes into its own as an entity, and didn’t exist in the same way even in the 1950s and 1960s. There’s a money interest in that, and there’s a political interest as well.”
Abstract forces, real materials and people
Kumekawa is a scholar with a strong interest in economic history; his previous book, “The First Serious Optimist: A.C. Pigou and the Birth of Welfare Economics,” was published in 2017. This coming fall, Kumekawa will be team-teaching a class on the relationship between economics and history, along with MIT economists Abhijit Banerjee and Jacob Moscona.
Working on “Empty Vessel” also necessitated that Kumekawa use a variety of research techniques, from archival work to journalistic interviews with people who knew the vessel well.
“I had a wonderful set of conversations with the man who was the last bargemaster,” Kumekawa says. “He was the person in effect steering the vessel for many years. He was so aware of all of the forces at play — the market for oil, the prices of accommodations, the regulations, the fact no one had reinforced the frame.”
“Empty Vessel” has already received critical acclaim. Reviewing it in The New York Times, Jennifer Szalai writes that this “elegant and enlightening book is an impressive feat.”
For his part, Kumekawa also took inspiration from a variety of writings about ships, voyages, commerce, and exploration, recognizing that these vessels contain stories and vignettes that illuminate the wider world.
“Ships work very well as devices connecting the global and the local,” he says. Using the barge as the organizing principle of his book, Kumekawa adds, “makes a whole bunch of abstract processes very concrete. The offshore itself is an abstraction, but it’s also entirely dependent on physical infrastructure and physical places. My hope for the book is it reinforces the material dimension of these abstract global forces.”
In recent years, some grass lawns around the country have grown a little taller in springtime thanks to No Mow May, a movement originally launched by U.K. nonprofit Plantlife in 2019 designed to raise awareness about the ecological impacts of the traditional, resource-intensive, manicured grass lawn. No Mow May encourages people to skip spring mowing to allow for grass to grow tall and provide food and shelter for beneficial creatures including bees, beetles, and other pollinators.
This year, MIT took part in the practice for the first time, with portions of the Kendall/MIT Open Space, Bexley Garden, and the Tang Courtyard forgoing mowing from May 1 through June 6 to make space for local pollinators, decrease water use, and encourage new thinking about the traditional lawn. MIT’s first No Mow May was the result of championing by the Graduate Student Council Sustainability Subcommittee (GSC Sustain) and made possible by the Office of the Vice Provost for Campus Space Management and Planning.
A student idea sprouts
Despite being a dense urban campus, MIT has no shortage of green spaces — from pocket gardens and community-managed vegetable plots to thousands of shade trees — and interest in these spaces continues to grow. In recent years, student-led initiatives supported by Institute leadership and operational staff have transformed portions of campus by increasing the number of native pollinator plants and expanding community gardens, like the Hive Garden. With No Mow May, these efforts stepped out of the garden and into MIT’s many grassy open spaces.
“The idea behind it was to raise awareness for more sustainable and earth-friendly lawn practices,” explains Gianmarco Terrones, GSC Sustain member. Those practices include reducing the burden of mowing, limiting use of fertilizers, and providing shelter and food for pollinators. “The insects that live in these spaces are incredibly important in terms of pollination, but they’re also part of the food chain for a lot of animals,” says Terrones.
Research has shown that holding off on mowing in spring, even in small swaths of green space, can have an impact. The early months of spring have the lowest number of flowers in regions like New England, and providing a resource and refuge — even for a short duration — can support fragile pollinators like bees. Additionally, No Mow May aims to help people rethink their yards and practices, which are not always beneficial for local ecosystems.
Signage at each No Mow site on campus highlighted information on local pollinators, the impact of the project, and questions for visitors to ask themselves. “Having an active sign there to tell people, ‘Look around. How many butterflies do you see after six weeks of not mowing? Do you see more? Do you see more bees?’ can cause subtle shifts in people’s awareness of ecosystems,” says GSC Sustain member Mingrou Xie. A mowed barrier around each project also helped visitors know that the areas of tall grass at No Mow sites are intentional.
Campus partners bring sustainable practices to life
To make MIT’s No Mow May possible, GSC Sustain members worked with the Office of the Vice Provost and the Open Space Working Group, co-chaired by Vice Provost for Campus Space Management and Planning Brent Ryan and Director of Sustainability Julie Newman. The Working Group, which also includes staff from Open Space Programming, Campus Planning, and faculty in the School of Architecture and Planning, helped to identify potential No Mow locations and develop strategies for educational signage and any needed maintenance. “Massachusetts is a biodiverse state, and No Mow May provides an exciting opportunity for MIT to support that biodiversity on its own campus,” says Ryan.
Students were eager for space on campus with high visibility, and the chosen locations of the Kendall/MIT Open Space, Bexley Garden, and the Tang Courtyard fit the bill. “We wanted to set an example and empower the community to feel like they can make a positive change to an environment they spend so much time in,” says Xie.
For GSC Sustain, that positive change also takes the form of the Native Plant Project, which they launched in 2022 to increase the number of Massachusetts-native pollinator plants on campus — plants like swamp milkweed, zigzag goldenrod, big leaf aster, and red columbine, with which native pollinators have co-evolved. Partnering with the Open Space Working Group, GSC Sustain is currently focused on two locations for new native plant gardens — the President’s Garden and the terrace gardens at the E37 Graduate Residence. “Our short-term goal is to increase the number of native [plants] on campus, but long term we want to foster a community of students and staff interested in supporting sustainable urban gardening,” says Xie.
Campus as a test bed continues to grow
After just a few weeks of growing, the campus No Mow May locations sprouted buttercups, mouse ear chickweed, and small tree saplings, highlighting the diversity lying dormant in the average lawn. Terrones also notes other discoveries: “It’s been exciting to see how much the grass has sprung up these last few weeks. I thought the grass would all grow at the same rate, but as May has gone on the variations in grass height have become more apparent, leading to non-uniform lawns with a clearly unmanicured feel,” he says. “We hope that members of MIT noticed how these lawns have evolved over the span of a few weeks and are inspired to implement more earth-friendly lawn practices in their own homes and spaces.”
No Mow May and the Native Plant Project fit into MIT’s overall focus on creating resilient ecosystems that support and protect the MIT community and the beneficial critters that call it home. MIT Grounds Services has long included native plants in the mix of what is grown on campus, and native pollinator gardens, like the Hive Garden, have been developed and cared for through partnerships between students and Grounds Services in recent years. Grounds Services, along with the consultants that design and install campus landscape projects, strives to select plants that help meet sustainability goals, like managing stormwater runoff and cooling. No Mow May can provide one more data point for the iterative process of choosing the best plants and practices for a unique microclimate like the MIT campus.
“We are always looking for new ways to use our campus as a test bed for sustainability,” says Director of Sustainability Julie Newman. “Community-led projects like No Mow May help us to learn more about our campus and share those lessons with the larger community.”
The Office of the Vice Provost, the Open Space Working Group, and GSC Sustain will plan to reconnect in the fall for a formal debrief of the project and its success. Given the positive community feedback, future possibilities of expanding or extending No Mow May will be discussed.
Nanostructures are a stunning array of intricate patterns that are imperceptible to the human eye, yet they help power modern life. They are the building blocks of microchip transistors, etched onto grating substrates of space-based X-ray telescopes, and drive innovations in medicine, sustainability, and quantum computing.
Since the 1970s, Henry “Hank” Smith, MIT professor emeritus of electrical engineering, has been a leading force in this field. He pioneered the use of proximity X-ray lithography, proving that X-rays’ short wavelength could produce high-resolution patterns at the nanometer scale. Smith also made significant advancements in phase-shifting masks (PSMs), a technique that disrupts light waves to enhance contrast. His design of attenuated PSMs, which he co-created with graduate students Mark Schattenburg PhD ʼ84 and Erik H. Anderson ʼ81, SM ʼ84, PhD ʼ88, is still used today in the semiconductor industry.
In recognition of these contributions, as well as highly influential achievements in liquid-immersion lithography, achromatic-interference lithography, and zone-plate array lithography, Smith recently received the 2025 SPIE Frits Zernike Award for Microlithography. Given by the Society of Photo-Optical Instrumentation Engineers (SPIE), the accolade recognizes scientists for their outstanding accomplishments in microlithographic technology.
“The Zernike Award is an impressive honor that aptly recognizes Hank’s pioneering contributions,” says Karl Berggren, MIT’s Joseph F. and Nancy P. Keithley Professor in Electrical Engineering and faculty head of electrical engineering. “Whether it was in the classroom, at a research conference, or in the lab, Hank approached his work with a high level of scientific rigor that helped make him decades ahead of industry practices.”
Now 88 years old, Smith has garnered many other honors. He was also awarded the SPIE BACUS Prize, named a member of the National Academy of Engineering, and is a fellow of the American Academy of Arts and Sciences, IEEE, the National Academy of Inventors, and the International Society for Nanomanufacturing.
Jump-starting the nano frontier
From an early age, Smith was fascinated by the world around him. He took apart clocks to see how they worked, explored the outdoors, and even observed the movement of water. After graduating from high school in New Jersey, Smith majored in physics at the College of the Holy Cross. From there, he pursued his doctorate at Boston College and served three years as an officer in the U.S. Air Force.
It was his job at MIT Lincoln Laboratory that ultimately changed Smith’s career trajectory. There, he met visitors from MIT and Harvard University who shared their big ideas for electronic and surface acoustic wave devices but were stymied by the physical limitations of fabrication. Yet, few were inclined to tackle this challenge.
“The job of making things was usually brushed off the table with, ‘oh well, we’ll get some technicians to do that,’” Smith said in his oral history for the Center for Nanotechnology in Society. “And the intellectual content of fabrication technology was not appreciated by people who had been ‘traditionally educated,’ I guess.”
More interested in solving problems than maintaining academic rank, Smith set out to understand the science of fabrication. His breakthrough in X-ray lithography signaled to the world the potential and possibilities of working on the nanometer scale, says Schattenburg, who is a senior research scientist at MIT Kavli Institute for Astrophysics and Space Research.
“His early work proved to people at MIT and researchers across the country that nanofabrication had some merit,” Schattenburg says. “By showing what was possible, Hank really jump-started the nano frontier.”
Cracking open lithography’s black box
By 1980, Smith left Lincoln Lab for MIT’s main campus and continued to push forward new ideas in his NanoStructures Laboratory (NSL), formerly the Submicron Structures Laboratory. NSL served as both a research lab and a service shop that provided optical gratings, which are pieces of glass engraved with sub-micron periodic patterns, to the MIT community and outside scientists. It was a busy time for the lab; NSL attracted graduate students and international visitors. Still, Smith and his staff ensured that anyone visiting NSL would also receive a primer on nanotechnology.
“Hank never wanted anything we produced to be treated as a black box,” says Mark Mondol, MIT.nano e-beam lithography domain expert who spent 23 years working with Smith in NSL. “Hank was always very keen on people understanding our work and how it happens, and he was the perfect person to explain it because he talked in very clear and basic terms.”
The physical NSL space in MIT Building 39 shuttered in 2023, a decade after Smith became an emeritus faculty member. NSL’s knowledgeable staff and unique capabilities transferred to MIT.nano, which now serves as MIT’s central hub for supporting nanoscience and nanotechnology advancements. Unstoppable, Smith continues to contribute his wisdom to the ever-expanding nano community by giving talks at the NSL Community Meetings at MIT.nano focused on lithography, nanofabrication, and their future.
Smith’s career is far from complete. Through his startup LumArray, Smith continues to push the boundaries of knowledge. He recently devised a maskless lithography method, known as X-ray Maskless Lithography (XML), that has the potential to lower manufacturing costs of microchips and thwart the sale of counterfeit microchips.
Dimitri Antoniadis, MIT professor emeritus of electrical engineering and computer science, is Smith’s longtime collaborator and friend. According to him, Smith’s commitment to research is practically unheard-of.
“Once professors reach emeritus status, we usually inspire and supervise research,” Antoniadis says. “It’s very rare for retired professors to do all the work themselves, but he loves it.”
Enduring influence
Smith’s legacy extends far beyond the groundbreaking tools and techniques he pioneered, say his friends, colleagues, and former students. His relentless curiosity and commitment to his graduate students helped propel his field forward.
He earned a reputation for sitting in the front row at research conferences, ready to ask the first question. Fellow researchers sometimes dreaded seeing him there.
“Hank kept us honest,” Berggren says. “Scientists and engineers knew that they couldn’t make a claim that was a little too strong, or use data that didn’t support the hypothesis, because Hank would hold them accountable.”
Smith never saw himself as playing the good cop or bad cop — he was simply a curious learner unafraid to look foolish.
“There are famous people, Nobel Prize winners, that will sit through research presentations and not have a clue as to what’s going on,” Smith says. “That is an utter waste of time. If I don’t understand something, I’m going to ask a question.”
As an advisor, Smith held his graduate students to high standards. If they came unprepared or lacked understanding of their research, he would challenge them with tough, unrelenting questions. Yet, he was also their biggest advocate, helping students such as Lisa Su SB/SM ʼ91, PhD ʼ94, who is now the chair and chief executive officer of AMD, and Dario Gil PhD ʼ03, who is now the chair of the National Science Board and senior vice president and director of research at IBM, succeed in the lab and beyond.
Research Specialist James Daley has spent nearly three decades at MIT, most of them working with Smith. In that time, he has seen hundreds of advisees graduate and return to offer their thanks. “Hank’s former students are all over the world,” Daley says. “Many are now professors mentoring their own graduate students and bringing with them some of Hank’s style. They are his greatest legacy.”
On May 6, MIT AgeLab’s Advanced Vehicle Technology (AVT) Consortium, part of the MIT Center for Transportation and Logistics, celebrated 10 years of its global academic-industry collaboration. AVT was founded with the aim of developing new data that contribute to automotive manufacturers, suppliers, and insurers’ real-world understanding of how drivers use and respond to increasingly sophisticated vehicle technologies, such as assistive and automated driving, while accelerating the applied insight needed to advance design and development. The celebration event brought together stakeholders from across the industry for a set of keynote addresses and panel discussions on critical topics significant to the industry and its future, including artificial intelligence, automotive technology, collision repair, consumer behavior, sustainability, vehicle safety policy, and global competitiveness.
Bryan Reimer, founder and co-director of the AVT Consortium, opened the event by remarking that over the decade AVT has collected hundreds of terabytes of data, presented and discussed research with its over 25 member organizations, supported members’ strategic and policy initiatives, published select outcomes, and built AVT into a global influencer with tremendous impact in the automotive industry. He noted that current opportunities and challenges for the industry include distracted driving, a lack of consumer trust and concerns around transparency in assistive and automated driving features, and high consumer expectations for vehicle technology, safety, and affordability. How will industry respond? Major players in attendance weighed in.
In a powerful exchange on vehicle safety regulation, John Bozzella, president and CEO of the Alliance for Automotive Innovation, and Mark Rosekind, former chief safety innovation officer of Zoox, former administrator of the National Highway Traffic Safety Administration, and former member of the National Transportation Safety Board, challenged industry and government to adopt a more strategic, data-driven, and collaborative approach to safety. They asserted that regulation must evolve alongside innovation, not lag behind it by decades. Appealing to the automakers in attendance, Bozzella cited the success of voluntary commitments on automatic emergency braking as a model for future progress. “That’s a way to do something important and impactful ahead of regulation.” They advocated for shared data platforms, anonymous reporting, and a common regulatory vision that sets safety baselines while allowing room for experimentation. The 40,000 annual road fatalities demand urgency — what’s needed is a move away from tactical fixes and toward a systemic safety strategy. “Safety delayed is safety denied,” Rosekind stated. “Tell me how you’re going to improve safety. Let’s be explicit.”
Drawing inspiration from aviation’s exemplary safety record, Kathy Abbott, chief scientific and technical advisor for the Federal Aviation Administration, pointed to a culture of rigorous regulation, continuous improvement, and cross-sectoral data sharing. Aviation’s model, built on highly trained personnel and strict predictability standards, contrasts sharply with the fragmented approach in the automotive industry. The keynote emphasized that a foundation of safety culture — one that recognizes that technological ability alone isn’t justification for deployment — must guide the auto industry forward. Just as aviation doesn’t equate absence of failure with success, vehicle safety must be measured holistically and proactively.
With assistive and automated driving top of mind in the industry, Pete Bigelow of Automotive News offered a pragmatic diagnosis. With companies like Ford and Volkswagen stepping back from full autonomy projects like Argo AI, the industry is now focused on Level 2 and 3 technologies, which refer to assisted and automated driving, respectively. Tesla, GM, and Mercedes are experimenting with subscription models for driver assistance systems, yet consumer confusion remains high. JD Power reports that many drivers do not grasp the differences between L2 and L2+, or whether these technologies offer safety or convenience features. Safety benefits have yet to manifest in reduced traffic deaths, which have risen by 20 percent since 2020. The recurring challenge: L3 systems demand that human drivers take over during technical difficulties, despite driver disengagement being their primary benefit, potentially worsening outcomes. Bigelow cited a quote from Bryan Reimer as one of the best he’s received in his career: “Level 3 systems are an engineer’s dream and a plaintiff attorney’s next yacht,” highlighting the legal and design complexity of systems that demand handoffs between machine and human.
In terms of the impact of AI on the automotive industry, Mauricio Muñoz, senior research engineer at AI Sweden, underscored that despite AI’s transformative potential, the automotive industry cannot rely on general AI megatrends to solve domain-specific challenges. While landmark achievements like AlphaFold demonstrate AI’s prowess, automotive applications require domain expertise, data sovereignty, and targeted collaboration. Energy constraints, data firewalls, and the high costs of AI infrastructure all pose limitations, making it critical that companies fund purpose-driven research that can reduce costs and improve implementation fidelity. Muñoz warned that while excitement abounds — with some predicting artificial superintelligence by 2028 — real progress demands organizational alignment and a deep understanding of the automotive context, not just computational power.
Turning the focus to consumers, a collision repair panel featuring Richard Billyeald from Thatcham Research, Hami Ebrahimi from Caliber Collision, and Mike Nelson from Nelson Law explored the unintended consequences of vehicle technology advances: spiraling repair costs, labor shortages, and a lack of repairability standards. Panelists warned that even minor repairs for advanced vehicles now require costly and complex sensor recalibrations — compounded by inconsistent manufacturer guidance and no clear consumer alerts when systems are out of calibration. The panel called for greater standardization, consumer education, and repair-friendly design. As insurance premiums climb and more people forgo insurance claims, the lack of coordination between automakers, regulators, and service providers threatens consumer safety and undermines trust. The group warned that until Level 2 systems function reliably and affordably, moving toward Level 3 autonomy is premature and risky.
While the repair panel emphasized today’s urgent challenges, other speakers looked to the future. Honda’s Ryan Harty, for example, highlighted the company’s aggressive push toward sustainability and safety. Honda aims for zero environmental impact and zero traffic fatalities, with plans to be 100 percent electric by 2040 and to lead in energy storage and clean power integration. The company has developed tools to coach young drivers and is investing in charging infrastructure, grid-aware battery usage, and green hydrogen storage. “What consumers buy in the market dictates what the manufacturers make,” Harty noted, underscoring the importance of aligning product strategy with user demand and environmental responsibility. He stressed that manufacturers can only decarbonize as fast as the industry allows, and emphasized the need to shift from cost-based to life-cycle-based product strategies.
Finally, a panel involving Laura Chace of ITS America, Jon Demerly of Qualcomm, Brad Stertz of Audi/VW Group, and Anant Thaker of Aptiv covered the near-, mid-, and long-term future of vehicle technology. Panelists emphasized that consumer expectations, infrastructure investment, and regulatory modernization must evolve together. Despite record bicycle fatality rates and persistent distracted driving, features like school bus detection and stop sign alerts remain underutilized due to skepticism and cost. Panelists stressed that we must design systems for proactive safety rather than reactive response. The slow integration of digital infrastructure — sensors, edge computing, data analytics — stems not only from technical hurdles but also from procurement and policy challenges.
Reimer concluded the event by urging industry leaders to re-center the consumer in all conversations — from affordability to maintenance and repair. With the rising costs of ownership, growing gaps in trust in technology, and misalignment between innovation and consumer value, the future of mobility depends on rebuilding trust and reshaping industry economics. He called for global collaboration, greater standardization, and transparent innovation that consumers can understand and afford. He highlighted that global competitiveness and public safety both hang in the balance. As Reimer noted, “success will come through partnerships” — between industry, academia, and government — that work toward shared investment, cultural change, and a collective willingness to prioritize the public good.
Rebecca Lemov. Veasey Conway/Harvard Staff Photographer
Nation & World
Brainwashing? Like ‘The Manchurian Candidate’?
More than vestige of Cold War, mind-control techniques remain with us in social media, cults, AI, elsewhere, new book argues
Liz Mineo
Harvard Staff Writer
June 16, 2025
7 min read
Brainwashing is often viewed as a Cold War relic — think ’60s films like “The Manchurian Candidate” and “The IPCRESS File.”
But Rebecca Lemov, professor of the history of science, argues in her recently released book “The Instability of Truth: Brainwashing, Mind Control, and Hyper-Persuasion” that it still persists. Elements of coercion and persuasion, the components of mind and behavior control, are used in cults, social media, AI, and even crypto culture, she said.
In this edited interview, Lemov talks about the history of brainwashing, why it endures, and how it works.
What is the common thread among brainwashing, mind control, and hyper-persuasion?
They’re all related. Brainwashing gets the most attention because it is the most dramatic and grabs headlines.
The concept attracted me 20 years ago when I set out to do my dissertation research. Having studied behavioral engineering, brainwashing seemed to me like the most extreme form of engineering someone to do something or think something different than what they might otherwise do.
Mind control is a synonym, but it has more of an emphasis on technology. I invented the word hyper-persuasion to describe a highly targeted set of techniques that can exist in our modern media environment. The common thread among them is one of coercion combined with persuasion.
You write that Korean War POWs in the early 1950s brought the concept of brainwashing home to the U.S. Did brainwashing exist before that?
Before the Korean War, there were incidents that certainly we could call brainwashing, going back to the ancient Greeks and certain cultic mysteries and transformations that were enacted in circumstances of coercion mixed with persuasion.
You could jump forward to the 1930s, to the “show trials” in Moscow where political enemies would be confessing to terrible crimes, or the 1940s, when Cardinal Mindszenty, a Hungarian war hero, after being arrested and imprisoned by the communist police, confessed to crimes against the Hungarian people and the church. He didn’t seem like himself, and it seemed that something had been done to him.
Mindszenty later described that he had been subjected to sleep deprivation, had potentially been drugged, and he said this famous line, which came to represent brainwashing, “Without knowing what had happened to me, I had become a different person.”
With the Korean War, U.S. Air Force POWs came forward with confessions that they had dropped secret germ warfare over China and Korea, and they looked like Mindszenty had looked, in a sort of hypnotic trance. All of this is depicted in the 1962 movie “The Manchurian Candidate.”
The crisis reached its peak when 21 U.S. POWs who had been held behind enemy lines declared that they would prefer not to return to the United States but rather stay in China. The then-CIA Director Allen Dulles declared that the soldiers had been converted against their will.
It was around this time that MKUltra, a secret CIA mind-control and chemical interrogation research program, was funded.
The case of heiress Patricia Hearst, who was kidnapped and brainwashed by leftist radicals in the 1970s, renewed public interest in brainwashing. Was it in fact brainwashing?
In the trial of Patty Hearst, which was called the trial of the century in 1976, four major experts who testified on her behalf said that what had been done to her was also what had been done to the POWs in the Korean War.
“That’s the paradox of brainwashing. It hides itself in plain sight.”
People had a hard time believing she had been coerced into becoming a leftist radical because she was captured on camera robbing a bank with the guerrilla group that had abducted her, but she said, “I accommodated my thoughts to coincide with theirs.” That’s the paradox of brainwashing. It hides itself in plain sight.
Some scholars argue that brainwashing doesn’t really exist, that it’s merely a hysterical response. In his book “The Captive Mind,” Polish poet Czeslaw Milosz writes that needing to accommodate your thoughts to coincide with a certain regime is to brainwash oneself. He describes how he ultimately couldn’t do it to himself, and that’s why he ended up leaving communist Poland.
In a sense, Patty Hearst, who was 19 when she was abducted and was subjected to physical abuse and indoctrination, couldn’t just pretend to be a soldier. She had to be one. And that’s brainwashing.
You argue in your book that social media, crypto, and other new technologies can produce some sort of mind control. How so?
Social media, AI companionship bots, and crypto, the culture of cryptocurrency investment, are digital environments that include a highly targeted form of emotional connectedness that often has a coercive element.
When we’re on social media, we’re constantly being exposed to messages and microenvironments, which resemble the process of brainwashing or mind control.
“When we’re on social media, we’re constantly being exposed to messages and microenvironments, which resemble the process of brainwashing or mind control.”
First, both start with a kind of ungrounding process or successive shocks. If you’re doom scrolling, you’re subjected to successive shocks, and there is a point of disorientation because we can feel overwhelmed by these algorithmically targeted pieces of information that we voluntarily expose ourselves to, but we can’t seem to stop.
Second is milieu control, which is the kind of siloing where you’re only getting controlled messages from certain sources.
That can result in what I call hyper-persuasion, which becomes a third form of brainwashing. What’s concerning is that these new technologies are targeted exactly for you. For example, AI chatbot companions may have your psychological makeup obtained from the internet or from information that you, and all of us, are giving away online.
You’ve been teaching a class on brainwashing for 20 years on and off. Why do you think students are interested in it?
There is a kind of fascination with brainwashing and mind control.
Some also may have some personal experience, like a relative was in a cult or sometimes even a personal relationship that was distressing to them. Sometimes they have questions about coercive control. How would one get into an abusive relationship? Or how do addictions feed into this? There is also a general fascination with cults.
Now students are more and more interested in social media and their use of targeted algorithms, and how the constant stream of trivial choices we all make may have a large effect.
Can anyone be susceptible to brainwashing?
There are studies of people who have been re-educated who describe that their guilt from childhood was capitalized on in the process of maybe being recruited into a cult.
We think that brainwashing has to do with being forced to believe something, or that it works at the level of cognition or ideas, but it works more at the level of emotions. This sort of tapping into the emotional layer is what we often don’t see — the way that they capitalize on unresolved trauma, which is unprocessed, extreme emotion.
“Being intelligent is not a protection against brainwashing.”
Being intelligent is not a protection against brainwashing. We shouldn’t think that only certain people are more susceptible to being brainwashed. You may think that you’re too sophisticated, but because brainwashing happens at the emotional level, there is no protection against it.
What I found helpful is to be aware of the process taking place at the emotional level. We’re getting cues all the time as we interact with social media, or with a group of people who maybe want to recruit us into their groups. It’s helpful to be mindful of the visceral cues and not simply the ideas.
Daniel Polley. Photo by Dylan Goodman
Health
Hope for sufferers of ‘invisible’ tinnitus disorder
Researchers develop way to objectively measure common malady, which may improve diagnosis, help in developing therapies
Alvin Powell
Harvard Staff
June 16, 2025
8 min read
Researchers are gaining new insights into the “invisible” disorder tinnitus, whose phantom ringing, hissing, and other noises are often linked to hearing damage, but for which physicians have not had an objective measure, until now.
The advance, reported in late April in the journal Science Translational Medicine and funded by the National Institute on Deafness and Other Communication Disorders, has the potential to provide physicians and researchers with a way to gauge tinnitus severity beyond the subjective patient questionnaires in use today. It may also help in developing more effective therapies.
In this edited conversation, Daniel Polley, director of the Eaton-Peabody Laboratories at Harvard-affiliated Massachusetts Eye and Ear and professor of otolaryngology–head and neck surgery at Harvard Medical School, discusses research conducted with MEE colleagues that examines involuntary pupil dilation and facial movement in reaction to sound in patients with varying levels of tinnitus.
What is tinnitus? Is it more than just ringing in the ears?
Most cases of tinnitus have one thing in common: the conscious awareness of a sound that doesn’t exist in the physical environment, a phantom sound.
I have tinnitus, and it’s like a 24/7 radio broadcast — a single note — that I usually can put out of mind. But it’s always there if I want to tune into it.
“I have tinnitus, and it’s like a 24/7 radio broadcast — a single note — that I usually can put out of mind. But it’s always there if I want to tune into it.”
It’s exceedingly common, affecting about 12 percent of people. Among those 65 and older, that number jumps to 25 percent or higher.
For most, this phantom sound is a mild nuisance, but for some it is debilitating. It’s not just an auditory problem, it’s a whole-life problem, a mental-well-being problem. Their tinnitus is not necessarily louder, because when most people match the loudness of their tinnitus to a physical sound, it is actually quite soft.
But what makes people with tinnitus disorder different is that it encroaches on systems that regulate mood and arousal level. A common complaint with severe tinnitus is that it takes longer to go to sleep and you wake up more easily.
Very often people with tinnitus disorder will have a hypersensitivity or aversion to sound. There’s high comorbidity with depression, anxiety, and social withdrawal, a spectrum of neurological and psychiatric issues that come along with it.
So, people who have real trouble, it’s not because it’s louder, but that they just can’t put it out of mind?
They can’t tune it out. Perhaps what makes the neurological signature of more severe tinnitus different from mild tinnitus is that the very systems in the brain responsible for tuning out irrelevant and uninformative things are co-opted in generating the tinnitus. That was the hypothesis that inspired the work that we did. That’s what got us going on this road.
And you believe your work could provide a way to understand this condition better, to study it better?
We need better therapies for tinnitus. That’s the top priority for the field — and for me as well. But taking shots at treatment without first laying the groundwork is unlikely to get us anywhere.
It’s not hard to claim a therapy works when success is measured only by subjective questionnaires and there’s no control for the placebo effect. To be convincing, future studies will need to show improvements in physiological signs of tinnitus distress — changes that are unlikely to come from placebo alone.
This study helps lay that groundwork. First, it offers a way to visualize different tinnitus subtypes. Second, it allows us to link those subtypes to an intervention and ask, “Did it work?” not just based on whether the patient says they feel better, but whether something objective in the body changed, too. That’s how we’ll know we’re truly making progress.
So, what’s different about tinnitus from the common cold or cancer is that, before now, we didn’t have a physiological way to identify what’s going on? It’s subjective and self-reported?
“The study provides a new way of thinking about what’s causing tinnitus. We wanted to come up with a measure that would relate to someone’s severity and not just distinguish them from someone without tinnitus.”
That’s right. It puts us back into the 19th century or 18th century. With any other neurological disorder, like epilepsy, you can measure a seizure or a stroke. With Parkinson’s, you have the neuroimaging and can do an objective measurement of motor behavior.
There aren’t many disorders that are truly hidden, where you can’t use outputs or inputs to shine light on the ghost in the machine. Chronic pain is first and foremost in that category — it’s even more common than tinnitus.
And for both of these conditions, you need an objective measure. For chronic pain, all they have is, “How bad is your pain today, on a scale of one to 10?” That’s the value of this metric: It predicts the individual severity scores that come from the questionnaire.
Can you describe the measure that you’ve documented?
The study provides a new way of thinking about what’s causing tinnitus. We wanted to come up with a measure that would relate to someone’s severity and not just distinguish them from someone without tinnitus.
We also wanted to avoid a measurement that could only be done in a specialized research hospital with expensive equipment. We want to measure these things with equipment that could feasibly wind up in a typical hearing health clinic.
Our idea is that when you or I go about our day, our brain is always surveilling the environment for possible threats so we can defend ourselves, flee, or freeze in place. Those systems are designed to get your conscious attention because you need to be aware of a possible threat.
If those systems are co-opted in the tinnitus-generation network, that would explain why you can’t put it out of mind: because you’ve incorporated the system that is designed to always elicit conscious awareness. If these networks identify a threat, they engage the sympathetic nervous system — fight, flight, or freeze — and you get, among other things, pupil dilation and increased galvanic skin response.
So, if people with severe tinnitus have their auditory threat evaluation system stuck in overdrive, then we could present emotionally evocative sounds that span a range: neutral sounds, like a typewriter; pleasant sounds, like a giggling baby; and sounds that almost everybody finds unpleasant, like an intense fit of coughing.
We expected that people with more severe tinnitus would have an overly robust response to a broad class of sounds, their sympathetic nervous system would report all of these sounds as possible threats.
How do you link that to an objective measure?
Obviously, we control our faces to communicate our emotional status, but our faces also involuntarily move to reflect our evaluation of events — pleasant or unpleasant — and our internal state of being — sad or happy. A lot of studies have examined facial movements when presented with images intended to cause happiness or fear, but nobody’s looked at facial movements when presented with sounds. We did and found that sounds do elicit facial movements.
“When we looked at people with severe tinnitus and sound sensitivity, there was a very clear difference.”
If the sound is pleasant, in a neurotypical person there’s more facial movement around the mouth. If the sound is unpleasant, you get movement in the brow, squeezing the eyes.
When we looked at people with severe tinnitus and sound sensitivity, there was a very clear difference. Their faces didn’t move. They had a blunted affect across the board, from pleasant to neutral to unpleasant. There was a diminished response to all.
Nobody’s ever measured it before. Nobody’s ever thought about the face and its connection with tinnitus. But that ended up being far and away the most informative measurement to predict an individual’s tinnitus severity.
There was a pupil response, too?
Yes, the pupil is part of the sympathetic nervous system. It’s wired into the fight, flight, or freeze system. The pupil dilates when the sympathetic nervous system is activated and, in our work, the pupil over-dilated to the sounds that the face was under-moving to.
They’re mirror images of each other. They were providing different perspectives on someone’s severity. If you use them together, you can better predict somebody’s tinnitus severity than if you used just one. The face is by far the most informative.
How could this be used as a tool?
The first FDA-approved device for tinnitus is available for prescription, but there’s controversy about how effective it is. One of the issues is that they use the same subjective questionnaires to evaluate results. Every time a tinnitus intervention is identified, people ask, “Is it placebo?”
My lab is focused on developing new therapies, so these results are an important milestone. We can incorporate them into our interventional studies. We want to migrate to a video-based system so we can make high-quality measurements faster, with less specialized equipment. We might get it into clinical use if doctors can subtype a tinnitus patient as severe or mild in their office with an objective measure.
Elena Luchkina is a research scientist in the Department of Psychology. Photo by Grace DuVal
Science & Tech
Out of sight but not out of mind
By 15 months, children can learn names of objects they’ve never seen, study says
Saima Sidik
Harvard Correspondent
June 16, 2025
6 min read
Love, quantum mechanics, yesterday’s weather — humans readily discuss these and many other things they cannot see. Infants start to develop this ability early, new research suggests. Even 15-month-olds can define nouns without seeing their corresponding objects, according to work performed by Elena Luchkina, a research scientist in Elizabeth Spelke’s lab at the Harvard Department of Psychology, and Sandra Waxman, a professor of psychology and director of the Infant and Child Development Center at Northwestern University.
In this edited conversation with the Gazette, Luchkina discusses how she infers what infants are thinking, why her work could help treat learning difficulties, and whether the ability to discuss the unseen sets us apart in the animal kingdom.
Are humans the only animals that talk about things they can’t see?
That’s debatable, and the answer depends on who you ask. There’s evidence that great apes can communicate about things that aren’t around, but in a limited way. For example, if you show an ape an object and then rapidly hide it, they may point to the place where they’ve just seen it. Or they can request food that is not currently around them. But this is not the same as how we, humans, can communicate about absent or invisible things via language.
For example, if I describe my favorite mug, I can give you all kinds of details that aren’t obvious from its appearance, like that my sister gave it to me and that she bought it at the corner store. Scientists haven’t observed nonhuman animals communicating about hidden objects or abstract concepts in such depth.
“The capacity to represent an unseen object and learn its name might be a building block for communication about more sophisticated abstract concepts.”
But we’re not born with this ability. By their first birthdays, most kids can do what apes do — point to the former locations of things they’ve seen recently, like a ball that their parent has just hidden. This is a big leap forward. Yet, being able to refer to recently seen things is different from being able to refer to unseen or abstract things. Kids usually develop this ability by age 2, and then they start talking about things like absent caregivers and what’s going to happen tomorrow. I hope to understand how and when this capacity emerges.
How did you figure out the age at which infants can learn the meanings of new words without seeing their corresponding objects?
We’re working with children who are too young to say more than the odd word here and there, so we tracked their eye movements to infer what they know and think.
During the training portion of the experiment, we showed infants a video of an actress who looked over her shoulder and named objects that popped up on a screen behind her. For example, if an apple appeared, she’d say, “Look, it’s an apple!” She did that three times, naming three objects from a particular category, like fruits. The fourth time, the object popped up behind her body where the infant couldn’t see it. Instead of using the real name of the fruit, she used a nonsense word, like, “Look, it’s a blicket!”
Finally, during the test, a screen popped up with two objects — one was a fruit that we thought would be unfamiliar to most infants in our study, such as a dragon fruit. The other was an unrelated item such as an ottoman or a car. Then we said to the infant, “Find the blicket!,” and we tracked how long the infant looked at each object. If an infant looked at the fruit longer than the unrelated item, we inferred that they understood a blicket to be a type of fruit, even without seeing it, because the other three items were fruits.
We repeated the procedure a few times with different categories of objects, and control conditions helped us gain confidence in the results.
And what did you learn?
What was really interesting was that 15-month-olds were able to find the blicket, but not 12-month-olds. That could be because 12-month-olds don’t have the attention span or memory capacity to complete the task yet. Or 12-month-olds may not have developed the ability to form a mental image of an object without ever having seen it, whereas 15-month-olds are mature enough to do it.
When scientists tried to answer this question in the past, their research suggested that infants had to be 19-24 months old before they could attach a word to an unseen object. So we’ve found that infants have this ability at a younger age than was previously thought.
In the paper, you compare an infant’s ability to spot the blicket to an adult’s ability to discuss some pretty sophisticated concepts, like justice or the square root of negative one. What’s the connection?
It’s true — the infants won’t be discussing imaginary numbers anytime soon. But the capacity to represent an unseen object and learn its name might be a building block for communication about more sophisticated abstract concepts. Similar to adults, infants in our study are creating mental representations of things they can’t currently see and holding such representations in mind while mapping words to them.
What’s next for this research?
We’d like to know whether the infants who are best at finding the blicket at 15 months are also most able to learn from language alone at 24 months. If that’s the case, it could mean that an early ability to learn about unseen objects gives infants an important foundation for learning from language later in life.
What kind of applications might this work have?
If infants who perform better on our task at 15 months also are better at learning from language at 24 months — and that’s truly because of the ability to learn from language and not other factors like memory or attention — then the find-the-blicket task might be useful as a diagnostic tool for difficulties with learning from language. Diagnosing these problems early could give us the opportunity to design interventions that would smooth out those difficulties before they lead to trouble in school.
The research described in this story received funding from the National Institutes of Health.
An ETH spin-off, Swiss Vascular, has developed anatomically exact silicone models of cerebral vessels. Through this development, researchers will not only reduce the amount of animal experimentation required but also improve the standard of medical training for complex medical procedures.
MacMillan has been named a Distinguished Scholar in the Princeton Branch of the Ludwig Institute for Cancer Research, which leverages the power of collaboration across scientific fields to advance cancer research.
After winning a nail-biting East of England final, which was held as part of the Cambridge Festival in April 2025, Spatika went on to represent the East of England in the UK final with her presentation on Time Travel with Your Brain. She will now go on to represent the UK in the International Final, taking place live at CERN Science Gateway in Switzerland to mark the competition's 20th anniversary.
“I was so surprised I won!”, said Spatika. “The other communicators were fantastic and we travelled through so many topics from planets to parasites and more!”.
Spatika took part in FameLab because she enjoyed talking about science to non-scientists and bringing some meaning to the complex work taking place in the labs. “I wanted a chance to bring humour into the science, because most of the times science is presented in professional environments, it’s all very serious”, added Spatika.
“I would recommend FameLab for anyone who’s even a tiny bit interested in knowing what happens to science when it’s let out in the wild!”
Claudia Antolini, Public Engagement Manager at the University of Cambridge said, “We are delighted for Spatika to represent the UK at the International FameLab final. Both at the East of England regional competition and the UK final Spatika gave outstanding performances, scientifically accurate but also extremely engaging with wise-cracking humour. We wish her the best of luck and we look forward to cheering her on for the International Final.”
The FameLab final will be streamed live from CERN on YouTube.
Spatika Jayaram is a PhD student and Gates Cambridge Scholar in the Department of Physiology, Development and Neuroscience and Magdalene College. In her research, she looks at social and emotional behaviours emerging across development, and how regions within the prefrontal cortex contribute to their regulation. Her supervisor is Professor Angela Roberts.
FameLab was created by Cheltenham Festivals in 2005 and is the largest science communication competition and training programme in the world. Participants have just three minutes to convey a scientific concept of their choice to an audience and expert panel of judges with no presentations and limited props.
Earlier this month, Cambridge PhD student Spatika Jayaram was crowned the winner of the FameLab 2025 UK final at this year’s Cheltenham Science Festival.
This year's prize winners are:
Dr Tore Butlin - Department of Engineering/Queens' College:
Tore has played a key role in reshaping the engineering course content and led the design of the new IA mechanics syllabus.
Dr Alexander Carter - Institute of Continuing Education/Fitzwilliam College:
As Academic Director for Philosophy & Interdisciplinary Studies, Alexander leads a broad-ranging portfolio of undergraduate and postgraduate courses in philosophy, creativity theory and research skills.
Dr Nicholas Evans - Department of Clinical Neurosciences/Wolfson College:
Nicholas has demonstrated an impressive commitment to medical education at the Clinical School for over a decade. As a mentor he has also shown a keen interest in student welfare.
Dr James Fergusson - Department of Applied Mathematics and Theoretical Physics:
James is an outstanding lecturer who brings great passion to everything he does. He has been heavily involved in establishing and supporting the new MPhil in Data Intensive Science.
Dr Marta Halina - Department of History and Philosophy of Science/Selwyn College:
Marta has almost single-handedly overhauled the History and Philosophy of Science Tripos, making it a more sought-after course. She has led a major restructuring of the MPhil course and has introduced the increasingly popular module, AI in healthcare.
Paul Hoegger - University Language Centre/Faculty of Modern and Medieval Languages and Linguistics/Fitzwilliam College:
Paul is a teacher of German much respected by generations of students. Over the years he has created several new courses including one on German literature through the ages and one on the poetry of Schubert.
Dr Kate Hughes - Department of Veterinary Medicine/Girton College:
Kate makes a valued contribution to Years 4 - 6 of the veterinary programme. She led the design of a new final year rotation in anatomic pathology for which she is educational lead.
Dr Mairi Kilkenny - Department of Biochemistry/Queens' College:
Mairi delivers innovative and creative teaching within the Department of Biochemistry, often incorporating digital media to stimulate the interest of her students. She is also a supervisor for several Colleges.
Dr Ewa Marek - Department of Chemical Engineering and Biotechnology/Jesus College:
Ewa is a valued lecturer, supervisor and Director of Studies. Passionate about sustainability, Ewa developed a new Part 1A course which introduces the topic in the context of chemical and biochemical engineering.
Dr Isabelle McNeill - Faculty of Modern and Medieval Languages and Linguistics/Trinity Hall:
Isabelle was a passionate and outstanding teacher who made vibrant contributions to French and to Film and Screen within the Faculty. A co-founder and trustee of the Cambridge Film Trust, Isabelle was made aware of her prize two days before she sadly passed away in February. She will be much missed by colleagues and students alike.
Dr Ali Meghji - Department of Sociology/Sidney Sussex College:
Ali has been instrumental in creating a whole new Tripos paper in the Department (Empire, Colonialism, Imperialism). As a teacher, he repeatedly receives glowing comments from students on the clarity of his exposition, the contemporary relevance of his topics, and his effective use of technology.
Dr Liam Saddington - Department of Geography/Lucy Cavendish College:
Liam was recruited as Training and Skills Director for the Tripos with a remit to oversee the quantitative and qualitative research training across the degree. He has led new innovations, such as creating a museum field trip for first-year students, organising a 'COP Cambridge' simulation for second-year students, and developing the dissertation 'research carousel'.
Dr Christopher Tilmouth - Faculty of English:
Chris' visionary leadership has reshaped both undergraduate and postgraduate education at Cambridge. As Director of Undergraduate Studies, Chris introduced critical reforms to enhance student progression.
Dr Juliet Usher-Smith - Department of Public Health and Primary Care/Emmanuel College:
Juliet has made important contributions to the Department through direct teaching, supervision and mentoring and goes the extra mile to foster a culture in which teaching and learning is valued by all.
The winners were presented with their awards by the University's Vice-Chancellor, Professor Deborah Prentice, at a ceremony also attended by Senior Pro-Vice-Chancellor (Education and Environmental Sustainability), Professor Bhaskar Vira. He said “The Pilkington Prize Award ceremony is one of my favourite events in the University calendar. It’s always deeply satisfying to see hard-working staff recognised for their commitment and dedication to teaching and learning. We all know that behind every great student is a great teacher and I feel privileged to work alongside such excellent colleagues.”
A total of fourteen dedicated and talented staff have been awarded the Pilkington Prize this year. The annual prizes are awarded in the name of Sir Alastair Pilkington to acknowledge excellence in teaching and to recognise the contribution each individual makes to a Department or Faculty.
Cambridge Zero Director Professor Emily Shuckburgh (Fellow of Darwin, Trinity alumna) has received a CBE for services to Climate Science and to the Public Communication of Climate Science.
"I am deeply honoured to accept this recognition, which is a reflection of the collective efforts of many scientists, communicators, educators, and advocates who strive every day to make climate science accurate, accessible and actionable at a time when honesty, clarity and urgency are more important than ever,” Professor Shuckburgh said.
Alongside leading the University of Cambridge’s major climate change initiative, Cambridge Zero, Emily is also Professor of Environmental Data Science at the Department of Computer Science and Technology. Her primary research is focused on the application of artificial intelligence to climate science and in this context she is Academic Director of the Institute of Computing for Climate Science, and co-Director of the UKRI Centre for Doctoral Training on the Application of AI to the study of Environmental Risks (AI4ER).
Professor Gordon Dougan (Fellow of Wolfson College), an Emeritus Professor who continues to work in the University’s Department of Medicine, and former Director of the Infection Health Challenge area at Wellcome, UK, has been awarded a CBE for services to Vaccines and to Global Health.
Professor Dougan is an internationally recognised expert in vaccinology, global health and infections. He was Head of Pathogens at the Wellcome Sanger Institute (WTSI) for over a decade and worked in the pharmaceutical industry (Wellcome Foundation/GSK) for part of his career, developing novel vaccines and other medicines. He has worked as an advisor to health agencies, industry, academia and regulatory agencies. He is an expert on the molecular basis of infection with a strong emphasis on pathogenic mechanisms/immunity, genomics, disease tracking and antibiotic resistance. He is currently President of the Microbiology Society of the UK.
He said: “I am delighted to receive this important recognition for my work and the people I have worked with and for. Applying science to the benefit of people and health is what I have been working toward throughout my career. I can recommend this path to anyone.”
Details of University alumni who are recognised in the King's Birthday Honours will be published on the University's alumni website.
The University extends its congratulations to all academics, staff and alumni who have received an honour.
Academics at the University of Cambridge are among those featured in the King's Birthday Honours 2025, which recognises the achievements and contributions of people across the UK.
The Facility for Laboratory Reconnection Experiments (FLARE) represents the next generation of research into fundamental plasma physics at the U.S. Department of Energy’s Princeton Plasma Physics Laboratory.
Cornell Tech researchers unveiled a robot that helps emergency room teams locate life-saving supplies faster, revealing how design can shape collaboration under pressure.
Lew-Williams is a professor of history and the director of the Program in Asian American Studies. The prize recognizes “outstanding scholarship that illuminates the past and seeks to anchor public discourse in a deeper understanding of history.”
In 2021, Hilal Mohammadzai was set to begin his senior year at the American University of Afghanistan (AUAF), where he was working toward a bachelor’s degree in computer science. However, that August, the Taliban seized control of the Afghan government, and Mohammadzai’s education — along with that of thousands of other students — was put on hold.
“It was an uncertain future for all of the students,” says Mohammadzai.
Mohammadzai ultimately did receive his undergraduate degree from AUAF in May 2023 after months of disruption, and after transferring and studying for one semester at the American University of Bulgaria. As he was considering where to take his studies next, Mohammadzai heard about the MIT Emerging Talent Certificate in Computer and Data Science. His friend graduated from the program in early 2023 and had only positive things to say about the education, community, and network.
Creating opportunities to learn data science
Part of MIT Open Learning, Emerging Talent develops global education programs for talented individuals from challenging economic and social circumstances, equipping them with the knowledge and tools to advance their education and careers.
The Certificate in Computer and Data Science is a year-long online learning program for talented learners including refugees, migrants, and first-generation low-income students from historically marginalized backgrounds and underserved communities worldwide. The curriculum incorporates computer science and data analysis coursework from MITx, professional skill building, capstone projects, mentorship and internship options, and opportunities for networking with MIT’s global community.
Throughout his undergraduate coursework, Mohammadzai discovered an affinity for data visualization, and decided that he wanted to pursue a career in data science. The opportunity with the Emerging Talent program presented itself at the perfect time. Mohammadzai applied and was accepted into the 2023-24 cohort, earning a spot out of a pool of over 2,000 applicants.
“I thought it would be a great opportunity to learn more data science to build up on my existing knowledge,” he says.
Expanding and deepening his data science knowledge
Mohammadzai’s acceptance to the Emerging Talent program came around the same time that he began an MBA program at the American University of Central Asia in Kyrgyzstan. For him, the two programs made for a perfect pairing.
“When you have data science knowledge, you usually also require domain knowledge — whether it's in business or economics — to help with interpreting data and making decisions,” he says. “Analyzing the data is one piece, but understanding how to interpret that data and make a decision usually requires domain knowledge.”
Although Mohammadzai had some data science experience from his undergraduate coursework, he learned new skills and new approaches to familiar knowledge in the Emerging Talent program.
“Data structures were covered at university, but I found it much more in-depth in the MIT courses,” says Mohammadzai. “I liked the way it was explained with real-life examples.”
He worked with students from different backgrounds, and used GitHub for group projects. Mohammadzai also took advantage of personal agency and job-readiness workshops provided by the Emerging Talent team, such as how to pursue freelancing and build a mentorship network — skills that he has taken forward in life.
“I found it an exceptional opportunity,” he says. “The courses, the level of education, and the quality of education that was provided by MIT was really inspiring to me.”
Applying data skills to real-world situations
After graduating with his Certificate in Computer and Data Science, Mohammadzai began a paid internship with TomorrowNow, which was facilitated by introductions from the Emerging Talent team. Mohammadzai’s resume and experience stood out to the hiring team, and he was selected for the internship program.
TomorrowNow is a climate-tech nonprofit that works with philanthropic partners, commercial markets, R&D organizations, and local climate adaptation efforts to localize and open source weather data for smallholder farmers in Africa. The organization builds public capacity and facilitates partnerships to deploy and sustain next-generation weather services for vulnerable communities facing climate change, while also enabling equitable access to these services so that African farmers can optimize scarce resources such as water and farm inputs.
Leveraging philanthropy as seed capital, TomorrowNow aims to de-risk weather and climate technologies to make high-quality data and products available for the public good, ultimately incentivizing the private sector to develop products that reach last-mile communities often excluded from advancements in weather technology.
For his internship, Mohammadzai worked with TomorrowNow climatologist John Corbett to understand the weather data, and ultimately learn how to analyze it to make recommendations on what information to share with customers.
“We challenged Hilal to create a library of training materials leveraging his knowledge of Python and targeting utilization of meteorological data,” says Corbett. “For Hilal, the meteorological data was a new type of data and he jumped right in, working to create training materials for Python users that not only manipulated weather data, but also helped make clear patterns and challenges useful for agricultural interpretation of these data. The training tools he built helped to visualize — and quantify — agricultural meteorological thresholds and their risk and potential impact on crops.”
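The threshold analysis Corbett describes can be sketched in plain Python. The crop, the 35 °C stress threshold, and the sample data below are illustrative assumptions for this article, not TomorrowNow's actual training materials or values.

```python
# Minimal sketch of agricultural-meteorological threshold analysis:
# flag days whose maximum temperature exceeds a crop heat-stress threshold.
# The threshold and data are hypothetical, chosen only to illustrate the idea.

CROP_HEAT_STRESS_C = 35.0  # assumed stress threshold for a generic crop

def heat_stress_days(daily_max_temps, threshold=CROP_HEAT_STRESS_C):
    """Return the indices of days whose max temperature exceeds the threshold."""
    return [i for i, t in enumerate(daily_max_temps) if t > threshold]

def stress_fraction(daily_max_temps, threshold=CROP_HEAT_STRESS_C):
    """Fraction of the period spent above the stress threshold."""
    if not daily_max_temps:
        return 0.0
    return len(heat_stress_days(daily_max_temps, threshold)) / len(daily_max_temps)

# One illustrative week of daily maximum temperatures (degrees C)
week = [31.2, 36.5, 34.9, 37.1, 33.0, 35.5, 30.8]
print(heat_stress_days(week))            # days 1, 3, and 5 exceed 35.0
print(round(stress_fraction(week), 3))   # roughly 0.429 of the week
```

Real pipelines of this kind would layer visualization and crop-specific interpretation on top of such per-day flags, but the core pattern is the same: map raw weather observations onto agronomic risk indicators.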
Although he had previously worked with real-world data, working with TomorrowNow marked Mohammadzai’s first experience in the domain of climate data. This area presented a unique set of challenges and insights that broadened his perspective. It not only solidified his desire to continue on a data science path, but also sparked a new interest in working with mission-focused organizations. Both TomorrowNow and Mohammadzai would like to continue working together, but he first needs to secure a work visa.
Without a visa, Mohammadzai cannot work for more than three to four hours a day, which makes securing a full-time job impossible. Back in 2021, the American University of Afghanistan filed a P-1 (priority one) asylum case for their students to seek resettlement in the United States because of the potential threat posed to them by the Taliban.
Mohammadzai’s hearing was scheduled for Feb. 1, but it was postponed after the program was suspended early this year.
As Mohammadzai looks to the end of his MBA program, his future feels uncertain. He has lived abroad since 2021 thanks to student visas and scholarships, but until he can secure a work visa he has limited options. He is considering pursuing a PhD program in order to keep his student visa status, while he waits on news about a more permanent option.
“I just want to find a place where I can work and contribute to the community.”
Hilal Mohammadzai graduated from the MIT Emerging Talent Certificate in Computer and Data Science as part of his path to pursue a career in data science.
Researchers from the Singapore-MIT Alliance for Research and Technology (SMART), MIT’s research enterprise in Singapore, along with colleagues from KK Women's and Children's Hospital (KKH), have developed a first-of-its-kind device to profile the immune function of newborns.
Using a single drop of blood, the BiophysicaL Immune Profiling for Infants (BLIPI) system provides real-time insights into newborns’ immune responses, enabling the early detection of severe inflammatory conditions and allowing for timely interventions. This critical innovation addresses the urgent and unmet need for rapid and minimally invasive diagnostic tools to protect vulnerable newborns, especially those born prematurely.
Critical unmet need in newborn care
Premature infants are particularly vulnerable to life-threatening conditions such as sepsis and necrotizing enterocolitis (NEC). Newborn sepsis — a bloodstream infection occurring in the first weeks of life — is a major global health challenge, causing up to 1 million infant deaths worldwide annually. NEC, a serious intestinal disease that causes severe inflammation, is one of the leading causes of death in premature babies — up to 50 percent of low-birth-weight neonates who get NEC do not survive. Infants can show vague symptoms, making diagnosis of these conditions challenging. However, both conditions can worsen rapidly and require immediate medical intervention for the best chance of recovery.
Current diagnostic methods to detect and prevent these serious conditions in newborns rely on large blood samples — up to 1 milliliter, a significant quantity of blood for a newborn — and lengthy laboratory processes. This is far from ideal: in very premature infants born before 28 weeks' gestation, total blood volume may be as little as 50 ml, which limits repeated or high-volume sampling and can lead to anemia and other complications. At the same time, conventional tests — such as blood cultures or inflammatory panels — may take hours to days to return actionable results, limiting prompt, targeted clinical interventions. The novel BLIPI device addresses these challenges by requiring only 0.05 ml of blood and delivering results within 15 minutes.
Revolutionizing newborn care
In a study, “Whole blood biophysical immune profiling of newborn infants correlates with immune responses,” published in Pediatric Research, the researchers demonstrated how BLIPI leverages microfluidic technology to measure how immune cells change when fighting infection by assessing their size and flexibility. Unlike conventional tests that only look for the presence of germs, BLIPI directly shows how a baby’s immune system is responding. The cell changes that BLIPI detects align with standard tests doctors rely on, including C-reactive protein levels, white blood cell counts, and immature-to-total neutrophil ratios. This testing format can quickly reveal whether a baby’s immune system is fighting an infection.
In the study, BLIPI was used to screen 19 infants at multiple time points — eight full-term and 11 preterm — and showed clear differences in how immune cells looked and behaved between the babies. Notably, when one premature baby developed a serious blood infection, the device was able to detect significant immune cell changes. This shows its potential in detecting infections early.
BLIPI is a portable device that can deliver results on the ward or in the neonatal intensive care unit, removing the need to transport blood samples to a laboratory and making it easy to deploy in resource-limited or rural health-care settings. Significantly, BLIPI needs just one drop of blood — one-twentieth of the volume that existing methods require. These swift results can help clinicians make timely, lifesaving decisions in critical situations such as sepsis or NEC, where early treatment is vital.
“Our goal was to create a diagnostic tool that works within the unique constraints of neonatal care — minimal blood volume, rapid turnaround, and high sensitivity. BLIPI represents a major step forward by providing clinicians with fast, actionable immune health data using a noninvasive method, where it can make a real difference for newborns in critical care,” says Kerwin Kwek, research scientist at SMART CAMP and SMART AMR, and co-lead author of the study.
“BLIPI exemplifies our vision to bridge the gap between scientific innovation and clinical need. By leveraging microfluidic technologies to extract real-time immune insights from whole blood, we are not only accelerating diagnostics but also redefining how we monitor immune health in fragile populations. Our work reflects a new paradigm in point-of-care diagnostics: rapid, precise, and patient-centric,” says MIT Professor Jongyoon Han, co-lead principal investigator at SMART CAMP, principal investigator at SMART AMR, and corresponding author of the paper.
“KKH cares for about two-thirds of all babies born weighing less than 1,500 grams in Singapore. These premature babies often struggle to fight infections with their immature immune systems. With BLIPI, a single prick to the baby’s finger or heel can give us rapid insights into the infant’s immune response within minutes. This allows us to tailor treatments more precisely and respond faster to give these fragile babies the best chance at a healthy start not just in their early days, but throughout their lives,” says Assistant Professor Yeo Kee Thai, senior consultant at the Department of Neonatology at KKH, and senior author of the study.
Future research will focus on larger clinical trials to validate BLIPI’s diagnostic accuracy across diverse neonatal populations with different age groups and medical conditions. The researchers also plan to refine the device’s design for widespread adoption in hospitals globally, bringing a much-needed diagnostic solution for vulnerable infants at their cot side. Beyond hospitals, pharmaceutical companies and researchers may also leverage BLIPI in clinical trials to assess immune responses to neonatal therapies in real-time — a potential game-changer for research and development in pediatric medicine.
The research conducted at SMART is supported by the National Research Foundation Singapore under its Campus for Research Excellence and Technological Enterprise program. This collaboration exemplifies how Singapore brings together institutions as part of interdisciplinary, multi-institution efforts to advance technology for global impact. The work from KKH was partially supported by the Nurturing Clinician Scientist Scheme under the SingHealth Duke-NUS Academic Clinical Programme.
Left to right: Genevieve Llanora of KKH; Kerwin Kwek of SMART, holding the BLIPI device with Assistant Professor Yeo Kee Thai of KKH; and Nicholas Ng of SMART. “BLIPI exemplifies our vision to bridge the gap between scientific innovation and clinical need,” says MIT Professor Jongyoon Han (not pictured), on the BLIPI project. “Our work reflects a new paradigm in point-of-care diagnostics: rapid, precise, and patient-centric.”
Researchers from the Singapore-MIT Alliance for Research and Technology (SMART), MIT’s research enterprise in Singapore, along with colleagues from KK Women's and Children's Hospital (KKH), have developed a first-of-its-kind device to profile the immune function of newborns. Using a single drop of blood, the BiophysicaL Immune Profiling for Infants (BLIPI) system provides real-time insights into newborns’ immune responses, enabling the early detection of severe inflammatory conditions and allo
Using a single drop of blood, the BiophysicaL Immune Profiling for Infants (BLIPI) system provides real-time insights into newborns’ immune responses, enabling the early detection of severe inflammatory conditions and allowing for timely interventions. This critical innovation addresses the urgent and unmet need for rapid and minimally invasive diagnostic tools to protect vulnerable newborns, especially those born prematurely.
Critical unmet need in newborn care
Premature infants are particularly vulnerable to life-threatening conditions such as sepsis and necrotizing enterocolitis (NEC). Newborn sepsis — a bloodstream infection occurring in the first weeks of life — is a major global health challenge, causing up to 1 million infant deaths worldwide annually. NEC, a serious intestinal disease that causes severe inflammation, is one of the leading causes of death in premature babies — up to 50 percent of low-birth-weight neonates who get NEC do not survive. Infants can show vague symptoms, making diagnosis of these conditions challenging. However, both conditions can worsen rapidly and require immediate medical intervention for the best chance of recovery.
Current diagnostic methods to detect and prevent these serious conditions in newborns rely on large blood samples — up to 1 milliliter, a significant quantity of blood for a newborn — and lengthy laboratory processes. This is not ideal for newborns whose total blood volume may be as little as 50 ml among very premature infants less than 28 weeks old, which limits repeated or high-volume sampling and can potentially lead to anemia and other complications. At the same time, conventional tests — such as blood cultures or inflammatory panels — may take hours to days to return actionable results, limiting prompt targeted clinical interventions. The novel BLIPI device addresses these challenges by requiring only 0.05 ml of blood and delivering results within 15 minutes.
Revolutionizing newborn care
In a study, “Whole blood biophysical immune profiling of newborn infants correlates with immune responses,” published in Pediatric Research, the researchers demonstrated how BLIPI leverages microfluidic technology to measure how immune cells change when fighting infection by assessing their size and flexibility. Unlike conventional tests that only look for the presence of germs, BLIPI directly shows how a baby’s immune system is responding. The cell changes that BLIPI detects align with standard tests doctors rely on, including C-reactive protein levels, white blood cell counts, and immature-to-total neutrophil ratios. This testing format can quickly reveal whether a baby’s immune system is fighting an infection.
In the study, BLIPI was used to screen 19 infants at multiple time points — eight full-term and 11 preterm — and showed clear differences in how immune cells looked and behaved between the babies. Notably, when one premature baby developed a serious blood infection, the device was able to detect significant immune cell changes. This shows its potential in detecting infections early.
BLIPI is a portable device that can give results at the ward or the neonatal intensive care units, removing the need for transporting blood samples to the laboratory and making it easily implementable in resource-limited or rural health-care settings. Significantly, BLIPI needs just one drop of blood, and 1/20 the blood volume than what existing methods require. These swift results can help clinicians make timely, lifesaving decisions in critical situations such as sepsis or NEC, where early treatment is vital.
“Our goal was to create a diagnostic tool that works within the unique constraints of neonatal care — minimal blood volume, rapid turnaround, and high sensitivity. BLIPI represents a major step forward by providing clinicians with fast, actionable immune health data using a noninvasive method, where it can make a real difference for newborns in critical care,” says Kerwin Kwek, research scientist at SMART CAMP and SMART AMR, and co-lead author of the study.
“BLIPI exemplifies our vision to bridge the gap between scientific innovation and clinical need. By leveraging microfluidic technologies to extract real-time immune insights from whole blood, we are not only accelerating diagnostics but also redefining how we monitor immune health in fragile populations. Our work reflects a new paradigm in point-of-care diagnostics: rapid, precise, and patient-centric,” says MIT Professor Jongyoon Han, co-lead principal investigator at SMART CAMP, principal investigator at SMART AMR, and corresponding author of the paper.
“KKH cares for about two-thirds of all babies born weighing less than 1,500 grams in Singapore. These premature babies often struggle to fight infections with their immature immune systems. With BLIPI, a single prick to the baby’s finger or heel can give us rapid insights into the infant’s immune response within minutes. This allows us to tailor treatments more precisely and respond faster to give these fragile babies the best chance at a healthy start not just in their early days, but throughout their lives,” says Assistant Professor Yeo Kee Thai, senior consultant at the Department of Neonatology at KKH, and senior author of the study.
Future research will focus on larger clinical trials to validate BLIPI’s diagnostic accuracy across diverse neonatal populations with different age groups and medical conditions. The researchers also plan to refine the device’s design for widespread adoption in hospitals globally, bringing a much-needed diagnostic solution for vulnerable infants at their cot side. Beyond hospitals, pharmaceutical companies and researchers may also leverage BLIPI in clinical trials to assess immune responses to neonatal therapies in real-time — a potential game-changer for research and development in pediatric medicine.
The research conducted at SMART is supported by the National Research Foundation Singapore under its Campus for Research Excellence and Technological Enterprise program. This collaboration exemplifies how Singapore brings together institutions as part of interdisciplinary, multi-institution efforts to advance technology for global impact. The work from KKH was partially supported by the Nurturing Clinician Scientist Scheme under the SingHealth Duke-NUS Academic Clinical Programme.
Left to right: Genevieve Llanora of KKH; Kerwin Kwek of SMART, holding the BLIPI device with Assistant Professor Yeo Kee Thai of KKH; and Nicholas Ng of SMART. “BLIPI exemplifies our vision to bridge the gap between scientific innovation and clinical need,” says MIT Professor Jongyoon Han (not pictured), on the BLIPI project. “Our work reflects a new paradigm in point-of-care diagnostics: rapid, precise, and patient-centric.”
MIT’s Environmental Solutions Initiative (ESI), a pioneering cross-disciplinary body that helped give a major boost to sustainability and solutions to climate change at MIT, will close as a separate entity at the end of June. But that’s far from the end for its wide-ranging work, which will go forward under different auspices. Many of its key functions will become part of MIT’s recently launched Climate Project. John Fernandez, head of ESI for nearly a decade, will return to the School of Architecture and Planning, where some of ESI’s important work will continue as part of a new interdisciplinary lab.
When the ideas that led to the founding of MIT’s Environmental Solutions Initiative first began to be discussed, its founders recall, there was already a great deal of work happening at MIT relating to climate change and sustainability. As Professor John Sterman of the MIT Sloan School of Management puts it, “there was a lot going on, but it wasn’t integrated. So the whole added up to less than the sum of its parts.”
ESI was founded in 2014 to help fill that coordinating role, and in the years since it has accomplished a wide range of significant milestones in research, education, and communication about sustainable solutions in a wide range of areas. Its founding director, Professor Susan Solomon, helmed it for its first year, and then handed the leadership to Fernandez, who has led it since 2015.
“There wasn’t much of an ecosystem [on sustainability] back then,” Solomon recalls. But with the help of ESI and some other entities, that ecosystem has blossomed. She says that Fernandez “has nurtured some incredible things under ESI,” including work on nature-based climate solutions, and also other areas such as sustainable mining, and reduction of plastics in the environment.
Desiree Plata, director of MIT’s Climate and Sustainability Consortium and associate professor of civil and environmental engineering, says that one key achievement of the initiative has been in “communication with the external world, to help take really complex systems and topics and put them in not just plain-speak, but something that’s scientifically rigorous and defensible, for the outside world to consume.”
In particular, ESI has created three very successful products, which continue under the auspices of the Climate Project. These include the popular TIL Climate Podcast, the Webby Award-winning Climate Portal website, and the online climate primer developed with Professor Kerry Emanuel. “These are some of the most frequented websites at MIT,” Plata says, and “the impact of this work on the global knowledge base cannot be overstated.”
Fernandez says that ESI has played a significant part in helping to catalyze what has become “a rich institutional landscape of work in sustainability and climate change” at MIT. He emphasizes three major areas where he feels the ESI has been able to have the most impact: engaging the MIT community, initiating and stewarding critical environmental research, and catalyzing efforts to promote sustainability as fundamental to the mission of a research university.
Engagement of the MIT community, he says, began with two programs: a research seed grant program and the creation of MIT’s undergraduate minor in environment and sustainability, launched in 2017.
ESI also created a Rapid Response Group, which gave students a chance to work on real-world projects with external partners, including government agencies, community groups, nongovernmental organizations, and businesses. In the process, they often learned why dealing with environmental challenges in the real world takes so much longer than they might have thought, he says, and that a challenge that “seemed fairly straightforward at the outset turned out to be more complex and nuanced than expected.”
The second major area, initiating and stewarding environmental research, grew into a set of six specific program areas: natural climate solutions, mining, cities and climate change, plastics and the environment, arts and climate, and climate justice.
These efforts included collaborations with a Nobel Peace Prize laureate, three successive presidential administrations from Colombia, and members of communities affected by climate change, including coal miners, indigenous groups, various cities, companies, the U.N., many agencies — and the popular musical group Coldplay, which has pledged to work toward climate neutrality for its performances. “It was the role that the ESI played as a host and steward of these research programs that may serve as a key element of our legacy,” Fernandez says.
The third broad area, he says, “is the idea that the ESI as an entity at MIT would catalyze this movement of a research university toward sustainability as a core priority.” While MIT was founded to be an academic partner to the industrialization of the world, “aren’t we in a different world now? The kind of massive infrastructure planning and investment and construction that needs to happen to decarbonize the energy system is maybe the largest industrialization effort ever undertaken. Even more than in the recent past, the set of priorities driving this have to do with sustainable development.”
Overall, Fernandez says, “we did everything we could to infuse the Institute in its teaching and research activities with the idea that the world is now in dire need of sustainable solutions.”
Fernandez “has nurtured some incredible things under ESI,” Solomon says. “It’s been a very strong and useful program, both for education and research.” But it is appropriate at this time to distribute its projects to other venues, she says. “We do now have a major thrust in the Climate Project, and you don’t want to have redundancies and overlaps between the two.”
Fernandez says “one of the missions of the Climate Project is really acting to coalesce and aggregate lots of work around MIT.” Now, with the Climate Project itself, along with the Climate Policy Center and the Center for Sustainability Science and Strategy, it makes more sense for ESI’s climate-related projects to be integrated into these new entities, and other projects that are less directly connected to climate to take their places in various appropriate departments or labs, he says.
“We did enough with ESI that we made it possible for these other centers to really flourish,” he says. “And in that sense, we played our role.”
As of June 1, Fernandez has returned to his role as professor of architecture and urbanism and building technology in the School of Architecture and Planning, where he directs the Urban Metabolism Group. He will also be starting up a new group called Environment ResearchAction (ERA) to continue ESI work in cities, nature, and artificial intelligence.
What’s it like to study at ETH Zurich? Once a year, secondary school students spend a week getting a first taste of university life. In this video, Juliana Pfammatter from Valais and Gianin Prevost from Graubünden share their experiences in lecture halls, laboratories and on construction sites.
A new lab-grown material has revealed that some of the effects of ageing in the heart may be slowed and even reversed. The discovery could open the door to therapies that rejuvenate the heart by changing its cellular environment, rather than focusing on the heart cells themselves. The research, published recently in Nature Materials, was carried out by a team led by Assistant Professor Jennifer Young from the Department of Biomedical Engineering in the College of Design and Engineering (CDE) at the National University of Singapore (NUS).
The team focused on the extracellular matrix (ECM)—the complex framework that surrounds and supports heart cells. This net-like scaffolding made of proteins and other components holds cells in place and sends chemical signals that guide how the cells function.
As the heart ages, the ECM becomes stiffer and its biochemical composition changes. These changes can trigger harmful activity in heart cells, contributing to scarring, loss of flexibility, and reduced function.
“Most ageing research focuses on how cells change over time,” said Asst Prof Young. “Our study looks instead at the ECM and how changes in this environment affect heart ageing.”
To investigate this, the team developed a hybrid biomaterial called DECIPHER (DECellularized In Situ Polyacrylamide Hydrogel-ECM hybrid), made by combining natural heart tissue with a synthetic gel to closely mimic the stiffness and composition of the ECM.
“Until now, it’s been difficult to pinpoint which of these changes—physical stiffness or biochemical signals—play the bigger role in driving this decline, because they usually happen at the same time,” said Avery Rui Sun, PhD student in NUS CDE and MBI, and first author of the study.
“The DECIPHER platform solves this problem, allowing researchers to independently control the stiffness and the biochemical signals presented to the cells—something no previous system using native tissue has been able to do.”
When the team placed aged heart cells onto DECIPHER scaffolds that mimicked ‘young’ ECM cues, they found that the cells began to behave more like young cells—even when the material remained stiff. Closer investigation revealed that this included a shift in gene activity across thousands of genes associated with ageing and cell function.
In contrast, young cells placed on ‘aged’ ECM began to show signs of dysfunction, even if the scaffold was soft.
“This showed us that the biochemical environment around aged heart cells matters more than stiffness, while young cells take in both cues,” said Asst Prof Young.
“Even when the tissue was very stiff, as it typically is in aged hearts, the presence of ‘young’ biochemical signals was enough to push aged cells back toward a healthier, more functional state,” she added. “This suggests that if we can find a way to restore these signals in the ageing heart, we might be able to reverse some of the damage and improve how the heart functions over time.”
“On the other hand, for young heart cells, we found that higher stiffness can cause them to prematurely ‘age’, suggesting that if we target specific aspects of ECM ageing, we might slow or prevent heart dysfunction over time.”
While the work is still in the research phase, the team says their findings open up a new direction for therapies aimed at preserving or restoring heart health during ageing, by targeting the ECM itself. Because the ECM plays a major role in cell function across all our tissues, they believe the DECIPHER method could also be adapted to study ageing and disease in other organs.
“Many age-related diseases involve changes in tissue stiffness—not just in the heart,” said Asst Prof Young. “For example, the same approach could be applied to kidney and skin tissue, and it could be adapted to study conditions like fibrosis or even cancer, where the mechanical environment plays a major role in how cells behave.”
University of Melbourne and Western Health researchers have developed a new artificial intelligence tool to prevent cancer patients from receiving incorrect doses of chemotherapy.
Rio Baran's senior thesis research sought to explore what few scientists have attempted. “Rio is an exceptional undergraduate student,” said her adviser, geosciences professor John Higgins.
The season runs June 12 through Aug. 2, opening with a musical adaptation of "The Bridges of Madison County." This year's children's show is a kid-friendly version of “The Odyssey."
The program is funded by a portion of a gift from Princeton 1970 alumnus William Fung of Hong Kong that is designed to substantially increase the University’s engagement with scholars around the world and inspire ideas that transcend borders.
The long-term aspirational goal of the Paris Agreement on climate change is to cap global warming at 1.5 degrees Celsius above preindustrial levels, and thereby reduce the frequency and severity of floods, droughts, wildfires, and other extreme weather events. Achieving that goal will require a massive reduction in global carbon dioxide (CO2) emissions across all economic sectors. A major roadblock, however, could be the industrial sector, which accounts for roughly 25 percent of global energy- and process-related CO2 emissions — particularly within the iron and steel sector, industry’s largest emitter of CO2.
Iron and steel production now relies heavily on fossil fuels (coal or natural gas) for heat, converting iron ore to iron, and making steel strong. Steelmaking could be decarbonized by a combination of several methods, including carbon capture technology, the use of low- or zero-carbon fuels, and increased use of recycled steel. Now a new study in the Journal of Cleaner Production systematically explores the viability of different iron-and-steel decarbonization strategies.
Today’s strategy menu includes improving energy efficiency, switching fuels and technologies, using more scrap steel, and reducing demand. Using the MIT Economic Projection and Policy Analysis model, a multi-sector, multi-region model of the world economy, researchers at MIT, the University of Illinois at Urbana-Champaign, and ExxonMobil Technology and Engineering Co. evaluate the decarbonization potential of replacing coal-based production processes with electric arc furnaces (EAF), along with either scrap steel or “direct reduced iron” (DRI), which is fueled by natural gas with carbon capture and storage (NG CCS DRI-EAF) or by hydrogen (H2 DRI-EAF).
Under a global climate mitigation scenario aligned with the 1.5 C climate goal, these advanced steelmaking technologies could result in deep decarbonization of the iron and steel sector by 2050, as long as technology costs are low enough to enable large-scale deployment. Higher costs would favor the replacement of coal with electricity and natural gas, greater use of scrap steel, and reduced demand, resulting in a more-than-50-percent reduction in emissions relative to current levels. Lower technology costs would enable massive deployment of NG CCS DRI-EAF or H2 DRI-EAF, reducing emissions by up to 75 percent.
Even without adoption of these advanced technologies, the iron-and-steel sector could significantly reduce its CO2 emissions intensity (how much CO2 is released per unit of production) with existing steelmaking technologies, primarily by replacing coal with gas and electricity (especially if it is generated by renewable energy sources), using more scrap steel, and implementing energy efficiency measures.
“The iron and steel industry needs to combine several strategies to substantially reduce its emissions by mid-century, including an increase in recycling, but investing in cost reductions in hydrogen pathways and carbon capture and sequestration will enable even deeper emissions mitigation in the sector,” says study supervising author Sergey Paltsev, deputy director of the MIT Center for Sustainability Science and Strategy (MIT CS3) and a senior research scientist at the MIT Energy Initiative (MITEI).
This study was supported by MIT CS3 and ExxonMobil through its membership in MITEI.
Advanced steelmaking technologies could enable significant decarbonization of the iron and steel sector and improve the world’s chances of achieving long-term climate goals.
Why are there so many different kinds of animals and plants on Earth? One of biology’s big questions is how new species arise and how nature’s incredible diversity came to be.
Cichlid fish from Lake Malawi in East Africa offer a clue. In this single lake, over 800 different species have evolved from a common ancestor in a fraction of the time it took for humans and chimpanzees to evolve from their common ancestor.
What’s even more remarkable is that the diversification of cichlids happened all in the same body of water. Some of these fish became large predators, others adapted to eat algae, sift through sand, or feed on plankton. Each species found its own ecological niche.
The researchers looked at the DNA of over 1,300 cichlids to see if there’s something special about their genes that might explain this rapid evolution. “We discovered that, in some species, large chunks of DNA on five chromosomes are flipped – a type of mutation called a chromosomal inversion,” said senior author Hannes Svardal from the University of Antwerp.
Normally, when animals reproduce, their DNA gets reshuffled in a process called recombination – mixing the genetic material from both parents. But this mixing is blocked within a chromosomal inversion. This means that gene combinations within the inversion are passed down intact without mixing, generation after generation, keeping useful adaptations together and speeding up evolution.
“It’s sort of like a toolbox where all the most useful tools are stuck together, preserving winning genetic combinations that help fish adapt to different environments,” said first author Moritz Blumer from Cambridge’s Department of Genetics.
These preserved sets of genes are sometimes called ‘supergenes’. In Malawi cichlids, the supergenes seem to play several important roles. Although cichlid species can still interbreed, the inversions help keep species separate by preventing their genes from blending too much. This is especially useful in parts of the lake where fish live side by side – like in open sandy areas where there’s no physical separation between habitats.
The genes inside these supergenes often control traits that are key for survival and reproduction – such as vision, hearing, and behaviour. For example, fish living deep in the lake (down to 200 meters) need different visual abilities than those near the surface, require different food, and need to survive at higher pressures. Their supergenes help maintain those special adaptations.
“When different cichlid species interbreed, entire inversions can be passed between them – bringing along key survival traits, like adaptations to specific environments, speeding up the process of evolution,” said Blumer.
The inversions also frequently act as sex chromosomes, helping determine whether a fish becomes male or female. Since sex chromosomes can influence how new species form, this opens new questions about how evolution works.
“While our study focused on cichlids, chromosomal inversions aren’t unique to them,” said co-senior author Professor Richard Durbin, from Cambridge’s Department of Genetics. “They’re also found in many other animals — including humans — and are increasingly seen as a key factor in evolution and biodiversity.”
“We have been studying the process of speciation for a long time,” said Svardal. “Now, by understanding how these supergenes evolve and spread, we’re getting closer to answering one of science’s big questions: how life on Earth becomes so rich and varied.”
Researchers have found that chunks of ‘flipped’ DNA can help fish quickly adapt to new habitats and evolve into new species, acting as evolutionary ‘superchargers’.
Many types of bacteria produce a protein complex that injects toxins into neighbouring cells to eliminate competitors. For the first time, researchers at ETH Zurich and Eawag discovered that these killer bacteria also use this weapon to feed on their neighbours.
Eden Fisher (from left), Amelia Heymach, and Addie Kelsey. Photo illustration by Liz Zonarich/Harvard Staff
Campus & Community
3 friends, 104 miles, and a tradition of taking the scenic route
Trio marked each year with a walk to a different New England state
Eileen O’Grady
Harvard Staff Writer
June 12, 2025
5 min read
At 4:30 a.m., with headlamps and backpacks strapped on, Amelia Heymach, Eden Fisher, and Addie Kelsey stepped out of Currier House and began walking — southwest through Watertown and Newton, bound for the Connecticut border. With just two weeks until Commencement, the three seniors had one last goal to cross off their College bucket list: a 47-mile walk to commemorate their undergraduate journey.
For Heymach, Fisher, and Kelsey, who became friends on the first day of freshman year, long walks have become a tradition to mark the end of each academic year: As sophomores, they walked to New Hampshire; as juniors to Rhode Island. These ultra-walks might seem extreme, but this trio says they are a way to spend time together while testing their endurance, trust, and commitment.
“People do big walks at transition points in their lives, and it’s not by accident,” Heymach said. “It’s a great opportunity to reflect and center yourself and to think about your goals for the future and reflect on the past. There’s something about those walks that’s so conducive to that sort of thinking, to put away your devices, to be outside, to connect with the lands around you.”
This year the friends trekked 47 miles, from Currier House to Connecticut.
None of them are strangers to trekking. Heymach, a statistics concentrator with a secondary in global health and health policy, hiked the Camino de Santiago in Spain with her mom during a gap year. Fisher, a joint concentrator in integrative biology and math with a secondary in studies of women, gender, and sexuality, is a lifelong runner with four marathons and an ultramarathon under her belt. Kelsey, a psychology concentrator with a secondary in integrative biology, took many walks with her family growing up.
The plans for their 25-mile walk to New Hampshire formed spontaneously after a late-night study session during final exam week of their sophomore fall. They had heard of some students who had walked to the state line, and wanted to see if they could do it, too.
“We went home to sleep, and then the next morning woke up super early and started,” Kelsey said. “There was no preparation.”
Their route, suggested by Google Maps, took them through Lexington, Burlington, and Billerica, sometimes through residential neighborhoods, other times through industrial areas, often with no sidewalks. The December sun was setting as they walked through Lowell, and then crossed the border into Pelham, New Hampshire, with a time of just under 10 hours.
“We got there, and it was dark. We had headlamps,” Heymach recalled. “Cars were going fast. We were on the side of the curb just running to the finish.”
For the return trip they called an Uber.
“It’s so funny Ubering back in like 40 minutes after you spent the entire day from before the sun has risen to sunset, walking,” Kelsey said. “You’re just seeing everything you passed flash by.”
Junior year they walked 32 miles to Rhode Island, heading south through Massachusetts towns including Dedham, Norwood, Walpole, and Wrentham. They crossed the border into Cumberland, Rhode Island, after 12 hours.
To pass the time while walking, the three sing songs and read aloud to each other from books they find in Little Free Libraries. They pack snacks and usually stop to buy pastries and sandwiches along the way.
All three agreed that at the end of a semester of rigorous academics and extracurriculars, something as simple as a long walk is a welcome change.
“I love the speed of a walk,” Heymach said. “Things can feel super fast-paced for many months at a time here. It’s saying, ‘No, we’re going to go our 2.5 miles-per-hour pace for as long as we want to.’”
Fisher agreed. “It allows you to slow down and enjoy things in a different way. I’ve been learning to appreciate a different pace.”
Last month, with Commencement on the horizon, the friends decided it was time to attempt Connecticut. On May 16 they headed southwest through Wellesley, Holliston, Milford, Mendon, and Douglas. To keep momentum, they developed mind games to stay mentally fresh. One rule? They weren’t allowed to ask how much farther they had to go.
“With walking there’s the physical aspect to it — you feel like your legs may be falling off toward the end — but a lot of it is mental,” Heymach said. “You have to tell yourself you’re not walking with a destination, you’re walking indefinitely.”
The final miles took them along the Southern New England Trunkline Trail through Douglas State Forest after dark, where they heard peeping frogs and spotted a beaver. They crossed the border into Thompson, Connecticut, around 10 p.m.
Next year they will be in three different countries, with Heymach doing community health work in Ecuador, Kelsey studying psychology at the University of Cambridge in England, and Fisher at Harvard’s Graduate School of Education.
“I have faith in the friendships that I have, and trust that we’ll be there to support each other regardless of where we are,” Heymach said. “I’m excited to do hikes with them in the future.”
They already have a few in mind: Kelsey has her eye on the Camino de Santiago, Heymach on the Lone Star Hiking Trail in Texas, and Fisher on the North-South Trail in Rhode Island. Heymach and Fisher are also interested in the Long Trail in Vermont.
Plus, there are a few more nearby states they haven’t reached by foot.
“We haven’t gone to New York yet. Or Maine,” Kelsey added. “But New York is far, so we’ll have to split it into a couple of days next time.”
Arts & Culture
From ‘joyous’ to ‘erotically engaged’ to ‘white-hot angry’
Stephanie Burt’s new anthology rounds up 51 works by queer and trans poets spanning generations
Eileen O’Grady
Harvard Staff Writer
June 12, 2025
5 min read
Niles Singer/Harvard Staff Photographer
As Stephanie Burt sees it, queer lyric poetry mirrors the patterns of queer life. She offers many examples in “Super Gay Poems,” her new anthology of 51 works by queer and trans poets spanning generations.
“I chose only poems I admire and wanted to write an essay about,” said Burt, Donald P. and Katherine B. Loker Professor of English, who paired each work with an original essay providing analysis and historical context. “I looked for stylistic range, from concise to effusive, rhymed-formal to free and chaotic, weird-and-challenging to apparently pellucid. I also looked for emotional range, from joyous to erotically engaged to white-hot angry, quietly curious, resolved, mournful, inviting, shy, and extroverted.”
Burt’s chosen poets span generations, from Frank O’Hara (1926-1966) to Logan February (born in 1999), and address major moments in modern queer history — from Paul Monette’s “The Worrying” (1988) responding to the HIV/AIDS crisis, to Jackie Kay’s “Mummy and donor and Deirdre” (1991), which explores the increasing visibility of queer families.
The anthology also reflects a wide geographic range, from Puerto Rican writer Roque Salas Rivera to Singaporean writer Stephanie Chan. Some names, like Audre Lorde and Adrienne Rich, will be familiar to many readers, while others may offer new discoveries.
In this edited conversation with the Gazette, Burt discusses shifts in the poetry landscape, the thrill of discovering new voices, and the power of poetry to capture historic moments.
Could you talk more about the notion of time in LGBTQIA+ poetry?
Many of us grew up with a very clear, normative set of expectations about how life works: You’re a child, then a teen, and then an adult, and then you’re old. You hang out in same-sex friend groups, and then you date, and then you “get serious,” and then you get married and have kids and raise kids, ideally 2.6 of them.
Often queer time, as encapsulated and addressed in queer poems, works differently. You don’t “grow up” if growing up means abandoning your intimate same-sex attachments in favor of straight-passing dating. Or you, as an adult, derive your energy from the kinds of exciting parties adults are supposed to abandon. Or maybe you go through puberty twice and feel (or act) like a baffled, excitable teen when you’re an adult. Even if you do end up monogamously connected to one stable partner for decades, as several of my poets did, the poetry about that connection works differently: It can feel more hard-won or feel like a former secret.
What changes do we see in LGBTQIA+ poetry after the 1969 Stonewall Uprising?
Post-Stonewall, we see more people come out. More people celebrate, openly, long-term relationships. People raise kids and attend to a next generation. During the 1970s, lesbian poets celebrate lesbian-only or women-only spaces; later on, not so much. During the 1980s and early 1990s, a whole generation — especially, but not only, gay men — lose half or more of their friends to HIV/AIDS, which is still a killer in much of the world but shows up less often in poems.
During the 2010s, people come out as trans or nonbinary. Also during the 2000s and 2010s, more people in the Global South come out and write poems about how their multiple identities and forms of belonging intersect. Sometimes they feel welcome where they grew up, and sometimes — as is the case with Logan February, whose poem “Prayer of the Slut” (2020) is included in the book — they do not and cannot.
Did you make any new discoveries while compiling this book?
Yes. I had never encountered Cherry Smyth, whose poem “In the South That Winter” (2001) is included, or Logan February. Nor had I paid close enough attention to Melvin Dixon (“The Falling Sky,” written 1992, published 1995) or to Judy Grahn (“Carol, in the park, chewing on straws,” 1970) until this book gave me the chance to re-examine their work.
Do you see poetry as a tool for documenting LGBTQIA+ history?
All art forms document history because all works of art come from historic moments. I picked the poems here because I loved them all, but some parts of global queer history in English don’t show up because I didn’t or couldn’t find awesome poems about them: queer liberation in the Republic of Ireland, for example, or the relationship between the Filipino diaspora and Filipino/a/x bakla identities. For the latter I recommend Rob Macaisa Colgate’s amazeballs book, published after “Super Gay Poems” went to press.
I did happily — but also sadly — include poems tied to big-deal historic moments such as Paul Monette’s AIDS Coalition to Unleash Power verse [“The Worrying”] which really needs another look these days. That said, some of my favorite poems have nothing to do with big chapters in public history — they construct aesthetic refuges from it, or alternatives to it. May Swenson’s “Found in Diary Dated May 29, 1973” (first book publication 2003), for example, an allegory of lesbian love via plant roots. You can connect it to history if you like, but that’s not what the poem invites us to do first or last.
What do you hope readers will take away from this anthology?
New favorite poets! New favorite poems! And a sense of the queer and trans and ace and intersex and pan and so on possibilities out there today, even at this troubled time for so many of us — possibilities that include fears and catastrophes but also resilience, community, solidarity, and joy.
Campus & Community
Turning 2 decades of discovery into impact
Isaac Kohlberg to step down as senior associate provost and chief technology development officer
Kirsten Mabry
Harvard Office of Technology Development
June 12, 2025
4 min read
After 20 years of service to the University, Isaac Kohlberg will step down from his role as Harvard University’s senior associate provost and chief technology development officer at the end of this year, concluding an extraordinary chapter that has significantly influenced Harvard’s vision and strategy for advancing research for the public good.
Kohlberg’s two decades at Harvard have been dedicated to one mission: advancing the University’s discoveries into practical applications that deliver impactful solutions for society. He played a key role in building broad corporate relationships and developing commercialization strategies to further advance Harvard research. Kohlberg joined Harvard in 2005 and established the Office of Technology Development (OTD) by consolidating the University’s technology transfer efforts into one centralized program. This initiative aims to advance innovations emerging from Harvard labs through licensing and the creation of startups, while also fostering corporate collaborations with industrial entities.
Kohlberg built a team with extensive industry experience and strong technical backgrounds. Together, they established a proactive culture in which OTD team members serve as the primary point of contact for Harvard researchers, facilitating industry collaborations and venture creation. Under Kohlberg’s leadership, there was a shift in how Harvard approached the commercialization of innovations developed in its labs; the University’s strategy became more supportive and engaged, increasing the pace of startup formation and pursuing industry relationships to advance the University’s science.
“Investments in science help advance knowledge, fuel progress, and spur economic development. That sense of mission runs through Harvard’s innovation enterprise, and we are grateful for the leadership role Isaac played in supporting a thriving culture of discovery and innovation,” said Harvard University Provost John Manning.
Kohlberg is widely recognized for his vision in forging robust collaborations between academia and industry, as well as for establishing funding mechanisms that bridge the critical development gap between academic research and real-world therapies or applications. Under his leadership, OTD established three accelerator funds: the Blavatnik Biomedical Accelerator, the Grid Accelerator, and the Climate and Sustainability Translational Fund. These initiatives provide essential funding and business development support to translational research projects. Consequently, these accelerators have played a crucial role in enabling research teams to commercialize their discoveries, resulting in the creation of numerous startups and licensed technologies derived from foundational research conducted at Harvard.
These accelerators have supported hundreds of research projects, launched numerous startups, and drawn millions in industry and venture investment back to Harvard. To name just a few, research backed by these accelerators has resulted in innovative cancer therapies licensed to biotech firms, a startup developing a new class of antibiotics created by Harvard chemists, and Harvard-developed technologies that are now featured in products worldwide.
Additionally, under Kohlberg’s leadership, the University formed landmark collaborations with global companies. These relationships brought not only research funding — industry-sponsored research more than doubled during his tenure — but also infused new energy and opportunities into innovations being developed at Harvard.
In the past five years alone, the advancement of Harvard research has resulted in the launch of 96 startups raising nearly $2.8 billion, more than 2,000 reports of innovations, 897 U.S. patents held by Harvard, and $300 million in research funding through industry partnerships.
Crucially, Kohlberg was also dedicated to safeguarding academic independence and making a broad societal impact. Harvard’s approach to corporate relationships — an effort that has grown into a robust collaboration between OTD, the Office of the Vice Provost for Research, and others — ensures that faculty members determine research agendas and that discoveries, even when developed commercially, remain accessible for humanitarian licensing or use in developing countries.
Kohlberg leaves behind a program recognized nationally as a model of excellence — one that combines deep expertise in industry relations, commercialization, and venture creation with a distinctly Harvard-style sense of duty to society.
“As I begin the next chapter of my life, I am deeply proud of all we have accomplished together,” Kohlberg reflected. “Harvard’s community has shown what’s possible when great ideas are met with entrepreneurial spirit, smart funding, and a commitment to the public good. The next wave of discovery and impact is just beginning.”
Before joining Harvard in 2005, Kohlberg was the CEO of Tel Aviv University’s Economic Corporation. From 1989 to 2000, he held various roles supporting innovation at New York University, including vice provost, vice president for industrial liaison, and head of the NYU School of Medicine’s Office of Science and Research. From 1982 to 1989, he served as CEO of YEDA, the commercial arm of the Weizmann Institute of Science in Israel.
Harvard will launch a search for Kohlberg’s successor in the coming weeks, and Kohlberg has committed to serve as an adviser to the University.
The Tahir Foundation, established by prominent Indonesian philanthropist Dato’ Sri Professor Dr Tahir, Chairman of Mayapada Group, has pledged S$2 million to the National University of Singapore (NUS) and Nanyang Technological University, Singapore (NTU Singapore) in support of Indonesian students at the two universities. The gift was announced at a ceremony held on 12 June 2025, at NUS.
Each university will receive S$1 million for student financial aid, enabling more Indonesian students, regardless of their financial circumstances, to benefit from a world-class education in Singapore and to reinforce the strong people-to-people ties between Indonesia and Singapore.
A shared vision for ASEAN’s future
The gift reflects a broader regional ambition: nurturing future leaders, strengthening ASEAN’s talent pipeline, and deepening collaboration across borders. It also affirms the universities’ shared commitment to inclusive education and regional impact.
Dato’ Sri Professor Dr Tahir, Chairman of Mayapada Group said: “I strongly believe that education is fundamental in uplifting society and transforming our collective future. NUS and NTU, as leading global universities, are well-positioned to contribute significantly to Singapore and the world. I hope this gift will enable Indonesian students to realise their potential and inspire them to give back to their communities.”
NUS President Professor Tan Eng Chye said: “At NUS, we believe education is transformative in empowering individuals and uplifting communities. This gift will open opportunities for more Indonesian students to pursue tertiary education at world-class institutions in Singapore, and help to build a more collaborative and resilient Southeast Asia region, by fostering ties, familiarity and connections between the youths of Indonesia and Singapore. We hope this gift will inspire others to also step forward to support and nurture the next generation of leaders in the region through education.”
NTU President Professor Ho Teck Hua said: “Dr Tahir’s gift reflects his belief in the power of education to transcend borders. It enables more talented Indonesian students to pursue their aspirations at NTU, opening doors to life-changing opportunities that can transform not just individuals, but entire communities. The gift embodies our shared conviction that education plays a pivotal role in nurturing future-ready talent and in fostering a more connected region.”
A lifelong commitment to education
Dr Tahir has long championed education as a key driver of social mobility. He studied on a scholarship at Nanyang University, which later merged with the University of Singapore in 1980 to form NUS. He graduated in 1976 with a Bachelor of Commerce and is recognised as a distinguished alumnus of both NUS and NTU.
Over the past two decades, Dr Tahir has been a steadfast supporter of education, contributing to student bursaries and scholarships, as well as research and academic programmes in Singapore. In 2012, he made a landmark S$30 million gift to the NUS Yong Loo Lin School of Medicine, his largest philanthropic gift to an educational institution. The gift helped advance medical research and academic programmes.
Dr Tahir also funded key initiatives such as the Tahir NTU-Universitas Airlangga Students Exchange Programme, which strengthens academic ties between Singapore and Indonesia, as well as the NTU Priorities Fund. In 2021, Dr Tahir made a gift in support of student bursaries, providing financial assistance to deserving NTU students and enabling them to pursue their education with greater peace of mind.
Investing in regional talent and collaboration
Strategically positioned in the heart of Southeast Asia, Singapore serves as a vital hub for regional education and collaboration. As the nation’s flagship university and top-ranked institution in Southeast Asia, NUS plays a leading role in advancing research, innovation and global partnerships across the region.
Situated in Singapore’s innovation corridor, NTU is consistently ranked among the world’s top young universities and is recognised for its excellence in sustainability, engineering, and interdisciplinary research. The University plays a pivotal role in advancing technological innovation, industry collaboration and talent development, contributing to Southeast Asia’s progress and global competitiveness.
Through this latest gift from the Tahir Foundation, more Indonesian students will have the opportunity to join the vibrant, diverse and intellectually dynamic communities at NUS and NTU. This will not only shape their personal and professional growth but will also contribute to deeper regional understanding and cooperation — laying the groundwork for a stronger, more inclusive Southeast Asia.
The Guardian calls it ‘shattering’. The Stage heralds it as a ‘challenging, artfully constructed indictment of Russian war crimes in Ukraine.’
Written by Anastasiia Kosodii and Josephine Burton, and directed by Burton, The Reckoning channels voices of Ukrainians across the country – a priest, a volunteer, a dentist, a security guard, a journalist – who are forced to confront the sudden horrors of invasion and occupation and to repair bonds of trust amid violence and fear. These voices are real, drawn from witness statements collected and conserved by the journalists and lawyers behind The Reckoning Project.
Rory Finnin, Professor of Ukrainian Studies and a Fellow of Robinson College at Cambridge, collaborated with Burton to help shape the play. His decades of research into Ukraine’s culture and society formed the basis for a grant in support of The Reckoning from the University of Cambridge’s AHRC Impact Starter Fund account.
“Our collaboration with Rory Finnin has been invaluable throughout the making of The Reckoning,” said Burton, who is also Artistic Director and Chief Executive of Dash Arts. “Rory’s insights into Ukraine’s past and present gave me deeper grounding as a director and co-writer and helped sharpen the questions the play asks of its audience.”
The Reckoning blends dynamic storytelling with movement, music, and food to forge new routes of solidarity and understanding with the audience. As Everything Theatre notes in a glowing review, “We leave not as passive spectators but as an active part of the struggle.”
Attendees share in a summer salad made over the course of the play by the Ukrainian and British cast – Tom Godwin, Simeon Kyslyi, Marianne Oldham, and Olga Safronova – who bring empathy, humour, and integrity to each scene. The conclusion of each performance features an invited speaker from the audience who comes to the stage to reckon with their own experience of the play from different ethical and intellectual perspectives.
Professor Finnin spoke on the play’s first night at the Arcola Theatre.
“Over three years into Russia’s full-scale invasion, we are too often tempted to turn our eyes away from Ukraine,” said Finnin. “But The Reckoning empowers us to look closely and to see with new purpose. It has been an incredible privilege to support a dynamic work of art that brings Ukrainian voices to the fore and challenges us to listen and respond to them, with urgency and moral clarity.”
The Reckoning is an intimate work of documentary theatre composed from a verified archive of witness testimonies chronicling Russia’s war of aggression. It is now playing at London’s Arcola Theatre to universal acclaim.
In Washington, where conversations about Russia often center on a single name, political science doctoral candidate Suzanne Freeman is busy redrawing the map of power in autocratic states. Her research upends prevailing narratives about Vladimir Putin’s Russia, asking us to look beyond the individual to understand the system that produced him.
“The standard view is that Putin originated Russia’s system of governance and the way it engages with the world,” Freeman explains. “My contention is that Putin is a product of a system rather than its author, and that his actions are very consistent with the foreign policy beliefs of the organization in which he was educated.”
That organization — the KGB and its successor agencies — stands at the center of Freeman’s dissertation, which examines how authoritarian intelligence agencies intervene in their own states’ foreign policy decision-making processes, particularly decisions about using military force.
Dismantling the “yes men” myth
Past scholarship has relied on an oversimplified characterization of intelligence agencies in authoritarian states. “The established belief that I’m challenging is essentially that autocrats surround themselves with ‘yes’ men,” Freeman says. She notes that this narrative stems in great part from a famous Soviet failure, when intelligence officers were too afraid to contradict Stalin’s belief that Nazi Germany wouldn’t invade in 1941.
Freeman’s research reveals a far more complex reality. Through extensive archival work, including newly declassified documents from Lithuania, Moldova, and Poland, she shows that intelligence agencies in authoritarian regimes actually have distinct foreign policy preferences and actively work to advance them.
“These intelligence agencies are motivated by their organizational interests, seeking to survive and hold power inside and beyond their own borders,” Freeman says.
When an international situation threatens those interests, authoritarian intelligence agencies may intervene in the policy process using strategies Freeman has categorized in an innovative typology: indirect manipulation (altering collected intelligence), direct manipulation (misrepresenting analyzed intelligence), preemption in the field (unauthorized actions that alter a foreign crisis), and coercion (threats against political leadership).
“By intervene, I mean behaving in some way that’s inappropriate in accordance with what their mandate is,” Freeman explains. That mandate includes providing policy advice. “But sometimes intelligence agencies want to make their policy advice look more attractive by manipulating information,” she notes. “They may change the facts out on the ground, or in very rare circumstances, coerce policymakers.”
From Soviet archives to modern Russia
Rather than studying contemporary Russia alone, Freeman uses historical case studies of the Soviet Union’s KGB. Her research into this agency’s policy intervention covers eight foreign policy crises between 1950 and 1981, including uprisings in Eastern Europe, the Sino-Soviet border dispute, and the Soviet-Afghan War.
What she discovered contradicts prior assumptions that the agency was primarily a passive information provider. “The KGB had always been important for Soviet foreign policy and gave policy advice about what they thought should be done,” she says. Intelligence agencies were especially likely to pursue policy intervention when facing a “dual threat”: domestic unrest sparked by foreign crises combined with the loss of intelligence networks abroad.
This organizational motivation, rather than simply following a leader’s preferences, drove policy recommendations in predictable ways.
Freeman sees striking parallels to Russia’s recent actions in Ukraine. “This dual organizational threat closely mirrors the threat that the KGB faced in Hungary in 1956, Czechoslovakia in 1968, and Poland from 1980 to 1981,” she explains. After 2014, Ukrainian intelligence reform weakened Russian intelligence networks in the country — a serious organizational threat to Russia’s security apparatus.
“Between 2014 and 2022, this network weakened,” Freeman notes. “We know that Russian intelligence had ties with a polling firm in Ukraine, where they had data saying that 84 percent of the population would view them as occupiers, that almost half of the Ukrainian population was willing to fight for Ukraine.” Despite these polls, intelligence officers recommended invading Ukraine anyway.
This pattern resembles the KGB’s advocacy for invading Afghanistan using the manipulation of intelligence — a parallel that helps explain Russia’s foreign policy decisions beyond just Putin’s personal preferences.
Scholarly detective work
Freeman’s research innovations have allowed her to access previously unexplored material. “From a methodological perspective, it’s new archival material, but it’s also archival material from regions of a country, not the center,” she explains.
In Moldova, she examined previously classified KGB documents: huge amounts of newly available and unstructured documents that provided details into how anti-Soviet sentiment during foreign crises affected the KGB.
Freeman’s willingness to search beyond central archives distinguishes her approach, especially valuable as direct research in Russia becomes increasingly difficult. “People who want to study Russia or the Soviet Union who are unable to get to Russia can still learn very meaningful things, even about the central state, from these other countries and regions.”
From Boston to Moscow to MIT
Freeman grew up in Boston in an academic, science-oriented family; both her parents were immunologists. Going against the grain, she was drawn to history, particularly Russian and Soviet history, beginning in high school.
“I was always curious about the Soviet Union and why it fell apart, but I never got a clear answer from my teachers,” says Freeman. “This really made me want to learn more and solve that puzzle myself.”
At Columbia University, she majored in Slavic studies and completed a master’s degree at the School of International and Public Affairs. Her undergraduate thesis examined Russian military reform, a topic that gained new relevance after Russia’s 2014 invasion of Ukraine.
Before beginning her doctoral studies at MIT, Freeman worked at the Russia Maritime Studies Institute at the U.S. Naval War College, researching Russian military strategy and doctrine. There, surrounded by scholars with political science and history PhDs, she found her calling.
“I decided I wanted to be in an academic environment where I could do research that I thought would prove valuable,” she recalls.
Bridging academia and public education
Beyond her core research, Freeman has established herself as an innovator in war-gaming methodology. With fellow PhD student Benjamin Harris, she co-founded the MIT Wargaming Working Group, which has developed a partnership with the Naval Postgraduate School to bring mid-career military officers and academics together for annual simulations.
Their work on war-gaming as a pedagogical tool resulted in a peer-reviewed publication in PS: Political Science & Politics titled “Crossing a Virtual Divide: Wargaming as a Remote Teaching Tool.” This research demonstrates that war games are effective tools for active learning even in remote settings and can help bridge the civil-military divide.
When not conducting research, Freeman works as a tour guide at the International Spy Museum in Washington. “I think public education is important — plus they have a lot of really cool KGB objects,” she says. “I felt like working at the Spy Museum would help me keep thinking about my research in a more fun way and hopefully help me explain some of these things to people who aren’t academics.”
Looking beyond individual leaders
Freeman’s work offers vital insight for policymakers who too often focus exclusively on autocratic leaders, rather than the institutional systems surrounding them. “I hope to give people a new lens through which to view the way that policy is made,” she says. “The intelligence agency and the type of advice that it provides to political leadership can be very meaningful.”
As tensions with Russia continue, Freeman believes her research provides a crucial framework for understanding state behavior beyond individual personalities. “If you’re going to be negotiating and competing with these authoritarian states, thinking about the leadership beyond the autocrat seems very important.”
Currently completing her dissertation as a predoctoral fellow at George Washington University’s Institute for Security and Conflict Studies, Freeman aims to contribute critical scholarship on Russia’s role in international security and inspire others to approach complex geopolitical questions with systematic research skills.
“In Russia and other authoritarian states, the intelligence system may endure well beyond a single leader’s reign,” Freeman notes. “This means we must focus not on the figures who dominate the headlines, but on the institutions that shape them.”
In 15 TED Talk-style presentations, MIT faculty recently discussed their pioneering research that incorporates social, ethical, and technical considerations and expertise, each supported by seed grants established by the Social and Ethical Responsibilities of Computing (SERC), a cross-cutting initiative of the MIT Schwarzman College of Computing. The call for proposals last summer was met with nearly 70 applications. A committee with representatives from every MIT school and the college convened to select the winning projects that received up to $100,000 in funding.
“SERC is committed to driving progress at the intersection of computing, ethics, and society. The seed grants are designed to ignite bold, creative thinking around the complex challenges and possibilities in this space,” said Nikos Trichakis, co-associate dean of SERC and the J.C. Penney Professor of Management. “With the MIT Ethics of Computing Research Symposium, we felt it important to not just showcase the breadth and depth of the research that’s shaping the future of ethical computing, but to invite the community to be part of the conversation as well.”
“What you’re seeing here is kind of a collective community judgment about the most exciting work when it comes to research in the social and ethical responsibilities of computing being done at MIT,” said Caspar Hare, co-associate dean of SERC and professor of philosophy.
The full-day symposium on May 1 was organized around four key themes: responsible health-care technology, artificial intelligence governance and ethics, technology in society and civic engagement, and digital inclusion and social justice. Speakers delivered thought-provoking presentations on a broad range of topics, including algorithmic bias, data privacy, the social implications of artificial intelligence, and the evolving relationship between humans and machines. The event also featured a poster session, where student researchers showcased projects they worked on throughout the year as SERC Scholars.
Policies regulating the organ transplant system in the United States are made by a national committee, and they often take more than six months to create and then years to implement, a timeline that many on the waiting list simply can’t survive.
Dimitris Bertsimas, vice provost for open learning, associate dean of business analytics, and Boeing Professor of Operations Research, shared his latest work in analytics for fair and efficient kidney transplant allocation. Bertsimas’ new algorithm examines criteria like geographic location, mortality, and age in just 14 seconds, a monumental change from the usual six hours.
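The article does not publish the algorithm itself, but the general idea of ranking transplant candidates on weighted criteria can be sketched in a toy form. Everything below, including the field names, weights, and scoring rule, is hypothetical and is not the UNOS policy or Bertsimas’ actual model:

```python
def allocation_score(candidate, weights):
    """Toy composite priority score: higher mortality risk and longer waiting
    time raise priority; greater distance lowers it. Weights are hypothetical."""
    return (weights["mortality"] * candidate["mortality_risk"]
            - weights["distance"] * candidate["distance_km"]
            + weights["wait"] * candidate["wait_years"])

def rank_candidates(candidates, weights):
    """Order candidates from highest to lowest priority score."""
    return sorted(candidates, key=lambda c: allocation_score(c, weights),
                  reverse=True)
```

The real system evaluates entire policy scenarios via optimization rather than a single greedy ranking; this sketch only illustrates how criteria like distance, mortality, and age might be combined into one comparable score.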
Bertsimas and his team work closely with the United Network for Organ Sharing (UNOS), a nonprofit that manages most of the national donation and transplant system through a contract with the federal government. During his presentation, Bertsimas shared a video from James Alcorn, senior policy strategist at UNOS, who offered this poignant summary of the impact the new algorithm has:
“This optimization radically changes the turnaround time for evaluating these different simulations of policy scenarios. It used to take us a couple months to look at a handful of different policy scenarios, and now it takes a matter of minutes to look at thousands and thousands of scenarios. We are able to make these changes much more rapidly, which ultimately means that we can improve the system for transplant candidates much more rapidly.”
The ethics of AI-generated social media content
As AI-generated content becomes more prevalent across social media platforms, what are the implications of disclosing (or not disclosing) that any part of a post was created by AI? Adam Berinsky, Mitsui Professor of Political Science, and Gabrielle Péloquin-Skulski, PhD student in the Department of Political Science, explored this question in a session that examined recent studies on the impact of various labels on AI-generated content.
In a series of surveys and experiments affixing labels to AI-generated posts, the researchers looked at how specific words and descriptions impacted users’ perception of deception, their intent to engage with the post, and ultimately whether they believed the post was true or false.
“The big takeaway from our initial set of findings is that one size doesn’t fit all,” said Péloquin-Skulski. “We found that labeling AI-generated images with a process-oriented label reduces belief in both false and true posts. This is quite problematic, as labeling intends to reduce people’s belief in false information, not necessarily true information. This suggests that labels combining both process and veracity might be better at countering AI-generated misinformation.”
Using AI to increase civil discourse online
“Our research aims to address how people increasingly want to have a say in the organizations and communities they belong to,” Lily Tsai explained in a session on experiments in generative AI and the future of digital democracy. Tsai, Ford Professor of Political Science and director of the MIT Governance Lab, is conducting ongoing research with Alex Pentland, Toshiba Professor of Media Arts and Sciences, and a larger team.
Online deliberative platforms have recently been rising in popularity across the United States in both public- and private-sector settings. Tsai explained that with technology, it’s now possible for everyone to have a say — but doing so can be overwhelming, or even feel unsafe. First, too much information is available, and second, online discourse has become increasingly “uncivil.”
The group focuses on “how we can build on existing technologies and improve them with rigorous, interdisciplinary research, and how we can innovate by integrating generative AI to enhance the benefits of online spaces for deliberation.” They have developed their own AI-integrated platform for deliberative democracy, DELiberation.io, and rolled out four initial modules. All studies have been in the lab so far, but they are also working on a set of forthcoming field studies, the first of which will be in partnership with the government of the District of Columbia.
Tsai told the audience, “If you take nothing else from this presentation, I hope that you’ll take away this — that we should all be demanding that technologies that are being developed are assessed to see if they have positive downstream outcomes, rather than just focusing on maximizing the number of users.”
A public think tank that considers all aspects of AI
When Catherine D’Ignazio, associate professor of urban science and planning, and Nikko Stevens, postdoc at the Data + Feminism Lab at MIT, initially submitted their funding proposal, they weren’t intending to develop a think tank, but a framework — one that articulated how artificial intelligence and machine learning work could integrate community methods and utilize participatory design.
In the end, they created Liberatory AI, which they describe as a “rolling public think tank about all aspects of AI.” D’Ignazio and Stevens gathered 25 researchers from a diverse array of institutions and disciplines who authored more than 20 position papers examining the most current academic literature on AI systems and engagement. They intentionally grouped the papers into three distinct themes: the corporate AI landscape, dead ends, and ways forward.
“Instead of waiting for OpenAI or Google to invite us to participate in the development of their products, we’ve come together to contest the status quo, think bigger-picture, and reorganize resources in this system in hopes of a larger societal transformation,” said D’Ignazio.
MIT faculty presented their pioneering research that incorporates social, ethical, and technical considerations and expertise at the MIT Ethics of Computing Research Symposium. All of the projects were supported by seed grants established by the Social and Ethical Responsibilities of Computing.
A drone is launched in late May in the Zaporizhzhia region of Ukraine. Photo by Ukrinform/NurPhoto via AP
Nation & World
Why U.S. should be worried about Ukrainian attack on Russian warplanes
Audacious — and wildly successful — use of inexpensive drones against superior force can be used anywhere, against anyone
Christina Pazzanese
Harvard Staff Writer
June 11, 2025
9 min read
Ukraine stunned Russia — and the world — when it launched Operation Spider’s Web, an audacious drone attack on June 1 that damaged or destroyed dozens of Russian warplanes. In the days since, Russian President Vladimir Putin has responded by escalating aerial assaults, launching record drone and missile attacks on Ukraine.
Beyond its secrecy and complexity, military analysts say Ukraine’s remarkable success using inexpensive homemade drones against a larger, more formidable adversary ushers modern warfare into a new and potentially troubling era.
In this edited conversation, the Gazette spoke with Eric Rosenbach, a senior lecturer at Harvard Kennedy School and past executive co-director at the Belfer Center, about how drones are rapidly reshaping global conflicts. Rosenbach, a former Army intelligence officer, was chief of staff to U.S. Secretary of Defense Ashton B. Carter from 2015-2017, a job in which he advised on Russia policy and led efforts to improve innovation at the Defense Department.
Beyond its success, what was significant about Operation Spider’s Web?
I think the most significant thing is that it showed Ukraine’s ability to reach deep into Russia and to hit targets that have a very high level of strategic significance. The use of drone technology itself was important, but it was more about the fact that they were able to project power in a way that I’m sure deeply impacted Putin.
So not only the targets they hit, but also that Ukraine was met with no resistance from the Russians?
Exactly. The targets that they hit, they’re called strategic aircraft. Those are aircraft that are used to deliver nuclear weapons. They were also the type of aircraft the Russians used to launch many of the high-end missile attacks against Ukraine, including some of their hypersonic and long-range cruise missiles. So symbolically, that was super important but also had an important operational effect.
And if you look at some of the details about how the Ukrainians must have done this, it’s amazing. It will be an amazing spy thriller movie to watch the Ukrainians smuggling in drones over borders, probably loading them into trucks, driving the trucks probably within, I bet, five to 10 kilometers of these military bases, launching the drones through the roofs of these cargo trucks and then piloting them in to hit the right targets. They must have done an enormous amount of advanced intelligence work to pull this off.
Many observers say this was a watershed event that suggests we’ve entered a new era of modern warfare. Do you agree?
Yes. I think it’s important to look at the sophistication of the Ukrainian drone program and how it has evolved. In the very beginning of the war, they were using a lot of off-the-shelf drone technology, wiring some munitions to them, using those then to attack Russian tanks or infantry formations.
Now, it’s quite different: They’re designing their own drones, using a global supply chain to get the parts, including from China, and then doing things like 3D printing the drones themselves. The production line produces numbers that, quite frankly, are much higher than what the U.S. is producing right now or probably could produce.
Also, they’re really pushing forward the capabilities of these weapons in terms of what they can do, and how they’re utilized. And part of that is based on the artificial intelligence that they have been able to develop, and the data. Some of it is also just learning on the battlefield.
“What would really start to worry you is if there were drones that were very long-range and had full autonomy.”
What is — or should be — most worrisome for the U.S., for NATO countries, and others whose drone programs lag behind Ukraine and Russia?
Three points here, I would say. One is that the Taiwanese are very closely studying what the Ukrainians have done and will do in the future in terms of using pretty inexpensive autonomous weapons for both defense and offense. In particular, the way they hit deep inside Russia.
Xi Jinping has said that Taiwan will become part of China sometime — some people say by 2027. I doubt that, but there’s a really important reason that the Taiwanese would want the ability to strike deep into China, if the [People’s Liberation Army] launched a military action against the island. I just wrote a report on that a few months ago that talks about how the Taiwanese could learn from the Ukrainians to develop their own type of autonomous weapons.
I would say the second is the Europeans are very nervous about how advanced the Russians have become with these autonomous weapons. They ask, for example, “What if the Russians wanted to try to mount some type of limited, small incursion into one of the Baltic states to test Article 5 of NATO and the way that they were doing it was, in large part, through the use of autonomous weapons?” The European defense technology sector is not very well developed, less so than the United States, and far behind Ukraine. I think the Europeans recognize that.
For the U.S., there’s a pretty jarring homeland security takeaway from this, which is: Imagine there are some bad actors — it wouldn’t even have to be a nation state, but it could be a terrorist group — that decides they’re going to 3D print these within the United States, or go across the border, and mimic the Ukrainians with a high-profile attack. I know a lot of people recognize that, but this should really drive home that the U.S. is vulnerable to attacks like this.
How vulnerable?
It’s getting better. If you look at high-profile public events, they’re called national security events. There’s quite good technology in place to protect the president, for example, when he’s out and about, the Super Bowl, things that would be logical opportunities to attack.
It’s the lower-profile things that are a much softer target that still could have a big impact both on the American psyche and, probably, the economy.
What does the U.S. need to do to prevent an attack like this?
The United States will always be vulnerable to some degree from these attacks that are based on newer technology, whether it’s a cyberattack, a space-based attack, or an attack from autonomous weapons. The risk will never be zero.
So that’s also important to recognize when you read, for example, about a “golden dome” that will protect everything in the country. It’s just not realistic to think that we’ll ever be zero risk, fully protected from some magical defense technology.
So that means that we probably do need to invest more in counter-UAV (Unmanned Aerial Vehicle) homeland defense. And you see in the headlines of the last couple days that the U.S. is pulling more of the support we’ve given to Ukraine back to protect Americans, whether in the Middle East or even in the homeland, to try to do that.
How close are we to having something effective?
To a limited degree, there are pretty effective counter-UAV systems that have been developed, but they exist in very limited numbers. To get up to the level where we would rest easier, I think it’s a years-not-months type of scenario.
What’s the potential global fallout from this demonstration by the Ukrainians?
From a geopolitical perspective, I think it makes clear that a peace agreement is nowhere near in the future. I’m sure the Russian response will be very heavy. The way Putin and the Russians have always reacted to something like this is with an overwhelming response of force. So that will be unfortunate. It probably will be one of the worst attacks we’ve seen against Ukraine from a technology perspective.
One thing to recognize about this is, although the operation was sophisticated, the drones were not fully autonomous. They were not completely reliant on AI, and they didn’t travel 5,000 miles. It was still a local-based operation with a pilot operating them to do the targeting. What would really start to worry you is if there were drones that were very long-range and had full autonomy — doing the identification, selection, and targeting on their own, without a human in the loop.
Why would that be more worrisome?
The range is a significant limiting factor right now, both in defense and offense, when you’re using autonomous weapons. Think about the U.S., for example: If someone drove a boat just off the coast of the United States, they wouldn’t even have to go through the border. If they had produced and put everything together there, and could get even a couple hundred miles of range, you could see how a lot of people in Washington, D.C., or other major metropolitan areas would be very vulnerable.
As I mentioned, true fully autonomous weapons would mean that a terrorist or nation-state could simply program targets, launch the killer drones, and then escape. The likelihood of this right now is low: that technology is still several years from being fully mature, and it hasn’t even really been tested extensively.
There is one thing to worry about: Throughout history, technologies that are lethal but have not been subject to a lot of testing end up having unintended consequences that sometimes are worse because of the catalytic effects in generating new classes of weapons.
Work & Economy
Remember when corporate America steered clear of politics on social media?
Elisabeth Kempf. Stephanie Mitchell/Harvard Staff Photographer
Christina Pazzanese
Harvard Staff Writer
June 11, 2025
7 min read
Study finds Twitter surge starting in 2017, most of it Democratic-leaning and from a surprising range of firms, with negative effects on stock price
There was a time when corporate America was not very online. Most companies used social media for promoting products and services or engaging with consumers in a friendly fashion. Political posts on a company Twitter account were rare.
That all changed between 2012 and 2022, when the volume of partisan speech on Twitter (now called X) from large corporations surged, more than doubling beginning in 2017, according to a new National Bureau of Economic Research working paper.
Researchers said the spike was driven disproportionately by companies using language associated with Democratic politicians. The moves frequently had negative effects on company stock prices.
In this edited conversation, the paper’s co-author Elisabeth Kempf, Jaime and Raquel Gilinski Associate Professor of Business Administration, explains corporate Twitter’s abrupt shift.
It seems ubiquitous now, but corporations putting out partisan-sounding tweets is only a recent development. What did you find?
Partisan speech was very rare for companies. Less than 1 percent of all the tweets they sent between 2012 and 2017 would constitute what was, according to our measures, very partisan speech.
Wading into partisan, polarized issues can be tricky for companies. We saw the first big change in 2017 where both Democratic and Republican partisan speech picks up, and then this big asymmetry starting in 2019.
That was the part that I found the most surprising. I was expecting to see some companies also start to use more Republican-sounding speech — to see, essentially, evidence of polarization. It was quite surprising to see that everybody seemed to adopt more Democratic speech after 2019 and that it was happening across the board, companies in blue states, red states, companies that are in consumer-facing businesses, also B2B [business to business].
“It was quite surprising to see that everybody seemed to adopt more Democratic speech after 2019 and that it was happening across the board, companies in blue states, red states.”
How did you define partisan corporate speech?
We built on previous work by Jesse Shapiro, my colleague at Harvard, and his co-authors. They developed this methodology to look at partisan speech in Congress that identifies phrases that allow you to correctly guess a speaker’s political party.
We used their methodology and applied it to tweets sent by Democratic and Republican politicians. That allowed us to identify highly partisan phrases and then apply it to corporate speech. Essentially, phrases that sound like they could be coming from a Democratic or Republican politician is what we consider partisan speech.
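As a rough, simplified illustration of the kind of measure Kempf describes — identifying phrases that predict a speaker’s party and then scoring other text against them — here is a toy sketch using smoothed log-odds over bigrams. The scoring rule and smoothing constant are assumptions made for illustration, not the actual Gentzkow–Shapiro methodology:

```python
import math
from collections import Counter

def phrase_counts(tweets):
    """Count bigram phrases across a list of tweets."""
    counts = Counter()
    for text in tweets:
        words = text.lower().split()
        counts.update(zip(words, words[1:]))
    return counts

def partisanship_scores(dem_tweets, rep_tweets, alpha=0.5):
    """Score each bigram by the smoothed log-odds of its frequency in the
    Democratic vs. Republican corpus. Positive means Democratic-sounding,
    negative means Republican-sounding. alpha is an assumed smoothing constant."""
    dem, rep = phrase_counts(dem_tweets), phrase_counts(rep_tweets)
    dem_total, rep_total = sum(dem.values()), sum(rep.values())
    scores = {}
    for phrase in set(dem) | set(rep):
        p_dem = (dem[phrase] + alpha) / (dem_total + alpha)
        p_rep = (rep[phrase] + alpha) / (rep_total + alpha)
        scores[phrase] = math.log(p_dem / p_rep)
    return scores

def tweet_partisanship(text, scores):
    """Average partisanship of a tweet's known bigrams; 0.0 if none match."""
    words = text.lower().split()
    vals = [scores[p] for p in zip(words, words[1:]) if p in scores]
    return sum(vals) / len(vals) if vals else 0.0
```

A corporate tweet that reuses phrases frequent among one party’s politicians would score away from zero in that party’s direction, which is the intuition behind calling such speech “partisan.”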
What did you learn about how partisan corporate speech has changed over time?
The first fact is that partisan corporate speech has become more common over this time period, 2012 to 2022. The second fact is that this growth was really asymmetric. Starting in 2019, we see rapid growth in Democratic-sounding speech, while Republican-sounding speech stayed constant or, if anything, decreased toward the end. And third, when we look at stock returns around these partisan corporate statements, we see that they tend to be followed by negative abnormal stock returns.
That said, we also see a lot of heterogeneity depending on the investor composition. For example, if there are more funds with ESG [environmental, social, and governance] objectives, then we see that the stock return after a Democratic-sounding statement tends to be less negative.
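The interview doesn’t spell out the event-study mechanics, but “negative abnormal stock returns around a statement” is conventionally measured with a market model: fit the stock’s normal relationship to the market over an estimation window, then compare actual returns around the event to what that relationship predicts. The window lengths below are illustrative choices, not the authors’ specification:

```python
def ols_alpha_beta(x, y):
    """Ordinary least squares fit of y = alpha + beta * x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    beta = (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))
    return my - beta * mx, beta

def event_abnormal_returns(stock, market, event_day, est_len, window=1):
    """Fit the market model on est_len trading days ending just before the
    event window, then return abnormal returns (actual minus predicted)
    for each day in [event_day - window, event_day + window]."""
    start = event_day - window - est_len
    alpha, beta = ols_alpha_beta(market[start:event_day - window],
                                 stock[start:event_day - window])
    return [stock[t] - (alpha + beta * market[t])
            for t in range(event_day - window, event_day + window + 1)]
```

A persistently negative abnormal return in the event window is what the paper interprets as the market’s reaction to the partisan statement itself, net of overall market movement.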
Is it clear why these partisan Twitter statements suddenly escalated during this time?
In 2017, we see both Democratic- and Republican-sounding speech go up for the first time. We don’t have so much to say on that.
What we do have more to say on is when we start to see this divergence between Democratic- and Republican-sounding speech, which is in January 2019. That was when BlackRock chief executive Larry Fink put out a pretty influential letter, his “Dear CEOs” annual letter. He explicitly called on CEOs to be more vocal, take more stances on polarizing issues.
This would be another potential piece of evidence to suggest that large institutional investors may have also played a role in this. BlackRock is the world’s largest asset manager so there was a lot of news reporting that this had kicked off a lot of discussion in the business world.
Also in 2019, the Business Roundtable [a CEO lobbying association] came out saying shareholder value maximization should not be the sole purpose. So, I do think it was quite influential.
We also see that 2019 brought massive growth in assets under management with a sustainability mandate. This is when a big shift happened in the investment industry.
Recently, we’ve seen protest campaigns against Tesla and Bud Light spurred by perceived partisan corporate statements. Could robust consumer boycotts that threaten profits have contributed to declines in shares as opposed to just public or investor reaction to statements?
This was the finding we struggled the most to make sense of. One possibility is, as you say, that this partisan speech could have effects on what we would call the cash flows in finance, or the profits that the companies make. For example, a company might be losing employees or losing customers. We argue that partisan speech can affect a third potential stakeholder group — investors — and how much they are willing to hold the stock.
We were looking at the 500 largest companies by market capitalization, so these are very big companies that need to raise capital from a pretty heterogeneous investor base. It’s hard for these large companies to raise capital only from Democrats or only from Republicans. And so, once you make a partisan statement that aligns with one group but not with the other, you might see precisely this negative stock price effect.
It is also difficult to explain the growth in Democratic-sounding speech with consumer or employee preferences alone. To be clear: We wouldn’t want to conclude that investor or consumer preferences did not play a role at all. But we thought it was worth pointing out that it was hard to explain everything just with consumers or just with employees.
Why is that? Because we see this rapid growth both for companies in consumer-facing industries and for companies in non-consumer-facing industries. Boycotts could explain why certain retail companies might adopt a certain speech, but we see it even in materials, which is a sector with companies that make metals, chemicals, coatings and things like this. It’s hard to think that there the consumer preferences for partisan speech would be super strong.
I thought employees could be quite relevant, but we saw the same trend in companies with employees located in very Democratic areas versus Republican areas. We also didn’t see that labor market tightness played a role. You might imagine that if you’re really in desperate need of talent, maybe you would engage more in this partisan speech if you thought it would help in hiring.
There were a couple of results that just made it hard to explain this with consumers or employees alone. Whereas, if you think about this investment channel, it could explain why we see it in companies that are based in blue areas, red areas. It could explain why this is such a broad-based phenomenon.
Are there other aspects of corporate speech ripe for further research?
Yes. I don’t think we have fully answered the question of why this big shift was happening then. We have suggestive evidence that investors could have played a role. But I don’t think there’s a full, causal relationship yet. So that is one big open question: How influential were investors versus consumers versus employees?
The second question is what were the more long-term effects? We’re looking at stock prices over a relatively short window. What does this do in the longer run to companies, and potentially also to their relationship to politicians or how partisan speech by companies influences politics, I think, are exciting areas for future research.
As more connected devices demand an increasing amount of bandwidth for tasks like teleworking and cloud computing, it will become extremely challenging to manage the finite amount of wireless spectrum available for all users to share.
Engineers are employing artificial intelligence to dynamically manage the available wireless spectrum, with an eye toward reducing latency and boosting performance. But most AI methods for classifying and processing wireless signals are power-hungry and can’t operate in real-time.
Now, MIT researchers have developed a novel AI hardware accelerator that is specifically designed for wireless signal processing. Their optical processor performs machine-learning computations at the speed of light, classifying wireless signals in a matter of nanoseconds.
The photonic chip is about 100 times faster than the best digital alternative, while converging to about 95 percent accuracy in signal classification. The new hardware accelerator is also scalable and flexible, so it could be used for a variety of high-performance computing applications. At the same time, it is smaller, lighter, cheaper, and more energy-efficient than digital AI hardware accelerators.
The device could be especially useful in future 6G wireless applications, such as cognitive radios that optimize data rates by adapting wireless modulation formats to the changing wireless environment.
By enabling an edge device to perform deep-learning computations in real-time, this new hardware accelerator could provide dramatic speedups in many applications beyond signal processing. For instance, it could help autonomous vehicles make split-second reactions to environmental changes or enable smart pacemakers to continuously monitor the health of a patient’s heart.
“There are many applications that would be enabled by edge devices that are capable of analyzing wireless signals. What we’ve presented in our paper could open up many possibilities for real-time and reliable AI inference. This work is the beginning of something that could be quite impactful,” says Dirk Englund, a professor in the MIT Department of Electrical Engineering and Computer Science, principal investigator in the Quantum Photonics and Artificial Intelligence Group and the Research Laboratory of Electronics (RLE), and senior author of the paper.
He is joined on the paper by lead author Ronald Davis III PhD ’24; Zaijun Chen, a former MIT postdoc who is now an assistant professor at the University of Southern California; and Ryan Hamerly, a visiting scientist at RLE and senior scientist at NTT Research. The research appears today in Science Advances.
Light-speed processing
State-of-the-art digital AI accelerators for wireless signal processing convert the signal into an image and run it through a deep-learning model to classify it. While this approach is highly accurate, the computationally intensive nature of deep neural networks makes it infeasible for many time-sensitive applications.
Optical systems can accelerate deep neural networks by encoding and processing data using light, which is also less energy intensive than digital computing. But researchers have struggled to maximize the performance of general-purpose optical neural networks when used for signal processing, while ensuring the optical device is scalable.
By developing an optical neural network architecture specifically for signal processing, which they call a multiplicative analog frequency transform optical neural network (MAFT-ONN), the researchers tackled that problem head-on.
The MAFT-ONN addresses the problem of scalability by encoding all signal data and performing all machine-learning operations within what is known as the frequency domain — before the wireless signals are digitized.
The researchers designed their optical neural network to perform all linear and nonlinear operations in-line. Both types of operations are required for deep learning.
Thanks to this innovative design, they only need one MAFT-ONN device per layer for the entire optical neural network, as opposed to other methods that require one device for each individual computational unit, or “neuron.”
“We can fit 10,000 neurons onto a single device and compute the necessary multiplications in a single shot,” Davis says.
The researchers accomplish this using a technique called photoelectric multiplication, which dramatically boosts efficiency. It also allows them to create an optical neural network that can be readily scaled up with additional layers without requiring extra overhead.
Results in nanoseconds
MAFT-ONN takes a wireless signal as input, processes the signal data, and passes the information along for later operations the edge device performs. For instance, by classifying a signal’s modulation, MAFT-ONN would enable a device to automatically infer the type of signal to extract the data it carries.
One of the biggest challenges the researchers faced when designing MAFT-ONN was determining how to map the machine-learning computations to the optical hardware.
“We couldn’t just take a normal machine-learning framework off the shelf and use it. We had to customize it to fit the hardware and figure out how to exploit the physics so it would perform the computations we wanted it to,” Davis says.
When they tested their architecture on signal classification in simulations, the optical neural network achieved 85 percent accuracy in a single shot, which can quickly converge to more than 99 percent accuracy using multiple measurements. MAFT-ONN only required about 120 nanoseconds to perform the entire process.
“The longer you measure, the higher accuracy you will get. Because MAFT-ONN computes inferences in nanoseconds, you don’t lose much speed to gain more accuracy,” Davis adds.
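One way to see why repeated fast shots buy accuracy is a simple majority-vote model. The sketch below is illustrative only; the paper's actual aggregation scheme is not described here, and the `majority_vote_accuracy` function and its parameters are invented for this example.

```python
import random

def majority_vote_accuracy(p_single=0.85, shots=7, trials=100_000, seed=0):
    """Estimate the accuracy of a majority vote over `shots` independent
    single-shot classifications, each correct with probability `p_single`."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        hits = sum(rng.random() < p_single for _ in range(shots))
        if hits > shots / 2:  # most shots agree with the true class
            correct += 1
    return correct / trials

# With roughly 85 percent single-shot accuracy, a handful of
# nanosecond-scale measurements pushes the aggregate accuracy near 99 percent.
print(majority_vote_accuracy(shots=1))
print(majority_vote_accuracy(shots=7))
```

Because each optical shot takes only nanoseconds, averaging several costs little latency, which matches Davis's point that accuracy can be gained without losing much speed.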
While state-of-the-art digital radio frequency devices can perform machine-learning inference in microseconds, optics can do it in nanoseconds or even picoseconds.
Moving forward, the researchers want to employ what are known as multiplexing schemes so they could perform more computations and scale up the MAFT-ONN. They also want to extend their work into more complex deep learning architectures that could run transformer models or LLMs.
This work was funded, in part, by the U.S. Army Research Laboratory, the U.S. Air Force, MIT Lincoln Laboratory, Nippon Telegraph and Telephone, and the National Science Foundation.
This image shows an artist’s interpretation of a new optical processor for an edge device, developed by MIT researchers, that performs machine-learning computations at the speed of light, classifying wireless signals in a matter of nanoseconds.
Photos by Stephanie Mitchell/Harvard Staff
Science & Tech
A step in fight against tick-borne disease
New molecular method differentiates sexes, reveals whether females have mated
Clea Simon
Harvard Correspondent
June 11, 2025
4 min read
Ticks pose a grave risk to public health, with nearly half a million cases of the tick-borne Lyme disease treated every year in the United States.
Young nymph and adult female ticks typically pose the greatest risk for transmitting infection to humans. But, researchers say, there is much that is unknown about the sexual biology of ticks, knowledge that would prove useful in control efforts.
A new paper published in the Journal of Medical Entomology marks a major stride forward, chronicling a groundbreaking molecular method that differentiates male and female blacklegged ticks (commonly called deer ticks) and also reveals whether these arachnids have mated.
Lyme is perhaps the best-known disease passed by ticks, but the bacterium behind that malady is just one of several associated with them, explained Isobel Ronai, a Life Sciences Research Foundation Post-doctoral Fellow of HHMI in the Department of Organismic and Evolutionary Biology and a primary author of the paper.
Citing other tick-borne diseases, such as babesiosis, Ronai pointed out that “ticks have a huge public health importance here in the United States in terms of the disease burden.”
“The number of cases of diseases transmitted by ticks and mosquitoes has increased significantly in the U.S. over the last 25 years, with tick-borne diseases now accounting for over 80 percent of all vector-borne disease cases reported each year,” said C. Ben Beard, principal deputy director of the CDC’s division of vector borne-diseases, who called for better “research aimed at better understanding tick reproductive biology.”
Ronai worked with her long-term collaborators at the University of Georgia, who had “a unique data set of tick genomes,” that is, the DNA sequence of blacklegged ticks from across the country. Together they developed a molecular test to determine whether individual ticks were male or female.
In addition to being able to sex the ticks, Ronai investigated “interesting results” in females collected in New York that had the marker for male DNA. By mating other ticks in the lab, she was able to determine that the marker could also be used to identify female ticks that had mated.
Ticks have a complex life cycle.
“They feed at multiple stages,” explained Ronai. “In mosquitoes only the adult stages take a blood meal, whereas in the ticks they feed at three stages throughout their life cycle.”
The ticks begin life as eggs, from which emerge larvae. Those larvae feed on a host before transitioning to their next stage, which is called a nymph. The nymphs also feed on a host.
“They take a blood meal and use that blood to progress to the adult stage,” Ronai said. “And then, at the adult stage, the female ticks feed to produce their egg clutch to start the next generation.”
Much is still unknown.
“I am very interested in our sexing assay being used to investigate whether there is any association between the sex of a larva or nymph and what hosts they’re feeding on,” said Ronai. From that behavior, she added, researchers can determine “what potential microbes that cause disease male and female ticks are picking up from hosts and transmitting to humans.”
For her own research in the Extavour lab group at Harvard, the way forward is clear.
“I’m excited to explore what is happening with the biology at these immature stages,” said Ronai. “The overall sexual biology of the ticks is an area where we have a lot to learn at the molecular level.”
Then, added Ronai, “we can get to the stage of developing new control strategies for ticks that target them specifically.”
Trace metals such as iron or zinc that are stored in deep-sea sediments are lost forever to phytoplankton on the ocean surface. This is what geochemists believed for a long time about the cycle of micronutrients in seawater. Now, researchers at ETH Zurich have discovered that this is not the case.
Art restoration takes steady hands and a discerning eye. For centuries, conservators have restored paintings by identifying areas needing repair, then mixing an exact shade to fill in one area at a time. Often, a painting can have thousands of tiny regions requiring individual attention. Restoring a single painting can take anywhere from a few weeks to over a decade.
In recent years, digital restoration tools have opened a route to creating virtual representations of original, restored works. These tools apply techniques of computer vision, image recognition, and color matching to generate a “digitally restored” version of a painting relatively quickly.
Still, there has been no way to translate digital restorations directly onto an original work, until now. In a paper appearing today in the journal Nature, Alex Kachkine, a mechanical engineering graduate student at MIT, presents a new method he’s developed to physically apply a digital restoration directly onto an original painting.
The restoration is printed on a very thin polymer film, in the form of a mask that can be aligned and adhered to an original painting. It can also be easily removed. Kachkine says that a digital file of the mask can be stored and referred to by future conservators, to see exactly what changes were made to restore the original painting.
“Because there’s a digital record of what mask was used, in 100 years, the next time someone is working with this, they’ll have an extremely clear understanding of what was done to the painting,” Kachkine says. “And that’s never really been possible in conservation before.”
As a demonstration, he applied the method to a highly damaged 15th century oil painting. The method automatically identified 5,612 separate regions in need of repair, and filled in these regions using 57,314 different colors. The entire process, from start to finish, took 3.5 hours, which he estimates is about 66 times faster than traditional restoration methods.
Kachkine acknowledges that, as with any restoration project, there are ethical issues to consider, in terms of whether a restored version is an appropriate representation of an artist’s original style and intent. Any application of his new method, he says, should be done in consultation with conservators with knowledge of a painting’s history and origins.
“There is a lot of damaged art in storage that might never be seen,” Kachkine says. “Hopefully with this new method, there’s a chance we’ll see more art, which I would be delighted by.”
Digital connections
The new restoration process started as a side project. In 2021, as Kachkine made his way to MIT to start his PhD program in mechanical engineering, he drove up the East Coast and made a point to visit as many art galleries as he could along the way.
“I’ve been into art for a very long time now, since I was a kid,” says Kachkine, who restores paintings as a hobby, using traditional hand-painting techniques. As he toured galleries, he came to realize that the art on the walls is only a fraction of the works that galleries hold. Much of the art that galleries acquire is stored away because the works are aged or damaged, and take time to properly restore.
“Restoring a painting is fun, and it’s great to sit down and infill things and have a nice evening,” Kachkine says. “But that’s a very slow process.”
As he has learned, digital tools can significantly speed up the restoration process. Researchers have developed artificial intelligence algorithms that quickly comb through huge amounts of data. The algorithms learn connections within this visual data, which they apply to generate a digitally restored version of a particular painting, in a way that closely resembles the style of an artist or time period. However, such digital restorations are usually displayed virtually or printed as stand-alone works and cannot be directly applied to retouch original art.
“All this made me think: If we could just restore a painting digitally, and effect the results physically, that would resolve a lot of pain points and drawbacks of a conventional manual process,” Kachkine says.
“Align and restore”
For the new study, Kachkine developed a method to physically apply a digital restoration onto an original painting, using a 15th-century painting that he acquired when he first came to MIT. His new method involves first using traditional techniques to clean a painting and remove any past restoration efforts.
“This painting is almost 600 years old and has gone through conservation many times,” he says. “In this case there was a fair amount of overpainting, all of which has to be cleaned off to see what’s actually there to begin with.”
He scanned the cleaned painting, including the many regions where paint had faded or cracked. He then used existing artificial intelligence algorithms to analyze the scan and create a virtual version of what the painting likely looked like in its original state.
Then, Kachkine developed software that creates a map of regions on the original painting that require infilling, along with the exact colors needed to match the digitally restored version. This map is then translated into a physical, two-layer mask that is printed onto thin polymer-based films. The first layer is printed in color, while the second layer is printed in the exact same pattern, but in white.
“In order to fully reproduce color, you need both white and color ink to get the full spectrum,” Kachkine explains. “If those two layers are misaligned, that’s very easy to see. So I also developed a few computational tools, based on what we know of human color perception, to determine how small of a region we can practically align and restore.”
Kachkine used high-fidelity commercial inkjets to print the mask’s two layers, which he carefully aligned and overlaid by hand onto the original painting and adhered with a thin spray of conventional varnish. The printed films are made from materials that can be easily dissolved with conservation-grade solutions, in case conservators need to reveal the original, damaged work. The digital file of the mask can also be saved as a detailed record of what was restored.
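The mapping step described above — compare the scan to the digital restoration, flag pixels that diverge, and record the restored color only there — can be caricatured in a few lines. This is a toy sketch, not Kachkine's software; the `infill_mask` function, the threshold, and the tiny 2x2 images are all invented for illustration.

```python
def color_dist(a, b):
    """Euclidean distance between two RGB triples."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def infill_mask(scan, restored, threshold=30.0):
    """Flag pixels where the scanned (damaged) painting diverges strongly
    from the digital restoration, and record the restored color only there.
    The printed mask stays blank (None) wherever the paint is intact."""
    mask = []
    n_damaged = 0
    for scan_row, rest_row in zip(scan, restored):
        row = []
        for s_px, r_px in zip(scan_row, rest_row):
            if color_dist(s_px, r_px) > threshold:
                row.append(r_px)   # loss: print the restored color here
                n_damaged += 1
            else:
                row.append(None)   # intact: leave unprinted / transparent
        mask.append(row)
    return mask, n_damaged

# tiny 2x2 RGB example: one pixel is a paint loss (bare white ground)
scan     = [[(120, 80, 40), (255, 255, 255)],
            [(119, 81, 40), (118, 79, 41)]]
restored = [[(120, 80, 40), (121, 82, 39)],
            [(119, 81, 40), (118, 79, 41)]]
mask, n = infill_mask(scan, restored)
print(n)  # → 1 pixel flagged for infilling
```

A real pipeline would work at far higher resolution, group flagged pixels into regions, and compare colors in a perceptually uniform space rather than raw RGB.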
For the painting that Kachkine used, the method was able to fill in thousands of losses in just a few hours. “A few years ago, I was restoring this baroque Italian painting with probably the same order of magnitude of losses, and it took me nine months of part-time work,” he recalls. “The more losses there are, the better this method is.”
He estimates that the new method can be orders of magnitude faster than traditional, hand-painted approaches. If the method is adopted widely, he emphasizes that conservators should be involved at every step in the process, to ensure that the final work is in keeping with an artist’s style and intent.
“It will take a lot of deliberation about the ethical challenges involved at every stage in this process to see how this can be applied in a way that’s most consistent with conservation principles,” he says. “We’re setting up a framework for developing further methods. As others work on this, we’ll end up with methods that are more precise.”
This work was supported, in part, by the John O. and Katherine A. Lutz Memorial Fund. The research was carried out, in part, through the use of equipment and facilities at MIT.nano, with additional support from the MIT Microsystems Technology Laboratories, the MIT Department of Mechanical Engineering, and the MIT Libraries.
Scans of the painting during various stages in its restoration. At left is the damaged piece, with the middle panel showing a map of the different kinds of damage present; green lines show full splits in the underlying panel support, thin red lines depict major paint craquelure, blue areas correspond to large paint losses, while pink regions show smaller defects like scratches. At right is the restored painting with the applied laminate mask.
Researchers at Weill Cornell Medicine and Memorial Sloan Kettering Cancer Center provide fresh insights about how cancers evolve when they metastasize – insights that could aid in developing strategies to improve the effectiveness of treatment.
Today, 2.2 billion people in the world lack access to safe drinking water. In the United States, more than 46 million people experience water insecurity, living with either no running water or water that is unsafe to drink. The increasing need for drinking water is stretching traditional resources such as rivers, lakes, and reservoirs.
To improve access to safe and affordable drinking water, MIT engineers are tapping into an unconventional source: the air. The Earth’s atmosphere contains millions of billions of gallons of water in the form of vapor. If this vapor can be efficiently captured and condensed, it could supply clean drinking water in places where traditional water resources are inaccessible.
With that goal in mind, the MIT team has developed and tested a new atmospheric water harvester and shown that it efficiently captures water vapor and produces safe drinking water across a range of relative humidities, including dry desert air.
The new device is a black, window-sized vertical panel, made from a water-absorbent hydrogel material, enclosed in a glass chamber coated with a cooling layer. The hydrogel resembles black bubble wrap, with small dome-shaped structures that swell when the hydrogel soaks up water vapor. When the captured vapor evaporates, the domes shrink back down in an origami-like transformation. The evaporated vapor then condenses on the glass, where it can flow down and out through a tube, as clean and drinkable water.
The system runs entirely on its own, without a power source, unlike other designs that require batteries, solar panels, or electricity from the grid. The team ran the device for over a week in Death Valley, California — the driest region in North America. Even in very low-humidity conditions, the device squeezed drinking water from the air at rates of up to 160 milliliters (about two-thirds of a cup) per day.
The team estimates that multiple vertical panels, set up in a small array, could passively supply a household with drinking water, even in arid desert environments. What’s more, the system’s water production should increase with humidity, supplying drinking water in temperate and tropical climates.
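A back-of-the-envelope check on that household estimate, using the Death Valley yields reported in the article and an assumed (not from the article) drinking requirement of 2 liters per person per day:

```python
import math

def panels_needed(daily_need_liters, per_panel_ml_per_day):
    """How many vertical panels are needed to meet a daily water target,
    given each panel's measured daily yield in milliliters."""
    return math.ceil(daily_need_liters * 1000 / per_panel_ml_per_day)

# Death Valley yields ranged from 57 mL/day up to about 160 mL/day per panel.
print(panels_needed(2.0, 160))  # best observed desert rate -> 13 panels
print(panels_needed(2.0, 57))   # driest-day rate -> 36 panels
```

In more humid climates the per-panel yield should rise, shrinking the required array accordingly.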
“We have built a meter-scale device that we hope to deploy in resource-limited regions, where even a solar cell is not very accessible,” says Xuanhe Zhao, the Uncas and Helen Whitaker Professor of Mechanical Engineering and Civil and Environmental Engineering at MIT. “It’s a test of feasibility in scaling up this water harvesting technology. Now people can build it even larger, or make it into parallel panels, to supply drinking water to people and achieve real impact.”
Zhao and his colleagues present the details of the new water harvesting design in a paper appearing today in the journal Nature Water. The study’s lead author is former MIT postdoc “Will” Chang Liu, who is currently an assistant professor at the National University of Singapore (NUS). MIT co-authors include Xiao-Yun Yan, Shucong Li, and Bolei Deng, along with collaborators from multiple other institutions.
“Through our work with soft materials, one property we know very well is the way hydrogel is very good at absorbing water from air,” Zhao says.
Researchers are exploring a number of ways to harvest water vapor for drinking water. Among the most efficient so far are devices made from metal-organic frameworks, or MOFs — ultra-porous materials that have also been shown to capture water from dry desert air. But the MOFs do not swell or stretch when absorbing water, and are limited in vapor-carrying capacity.
Water from air
The group’s new hydrogel-based water harvester addresses another key problem in similar designs. Other groups have designed water harvesters out of micro- or nano-porous hydrogels. But the water produced from these designs can be salty, requiring additional filtering. Salt is a naturally absorbent material, and researchers embed salts — typically, lithium chloride — in hydrogel to increase the material’s water absorption. The drawback, however, is that this salt can leak out with the water when it is eventually collected.
The team’s new design significantly limits salt leakage. Within the hydrogel itself, they included an extra ingredient: glycerol, a liquid compound that naturally stabilizes salt, keeping it within the gel rather than letting it crystallize and leak out with the water. The hydrogel itself has a microstructure that lacks nanoscale pores, which further prevents salt from escaping the material. The salt levels in the water they collected were below the standard threshold for safe drinking water, and significantly below the levels produced by many other hydrogel-based designs.
In addition to tuning the hydrogel’s composition, the researchers made improvements to its form. Rather than keeping the gel as a flat sheet, they molded it into a pattern of small domes resembling bubble wrap, which act to increase the gel’s surface area, along with the amount of water vapor it can absorb.
The researchers fabricated a half-square-meter of hydrogel and encased the material in a window-like glass chamber. They coated the exterior of the chamber with a special polymer film, which helps to cool the glass and stimulates any water vapor in the hydrogel to evaporate and condense onto the glass. They installed a simple tubing system to collect the water as it flows down the glass.
In November 2023, the team traveled to Death Valley, California, and set up the device as a vertical panel. Over seven days, they took measurements as the hydrogel absorbed water vapor during the night (the time of day when water vapor in the desert is highest). In the daytime, with help from the sun, the harvested water evaporated out from the hydrogel and condensed onto the glass.
Over this period, the device worked across a range of humidities, from 21 to 88 percent, and produced between 57 and 161.5 milliliters of drinking water per day. Even in the driest conditions, the device harvested more water than other passive and some actively powered designs.
“This is just a proof-of-concept design, and there are a lot of things we can optimize,” Liu says. “For instance, we could have a multipanel design. And we’re working on a next generation of the material to further improve its intrinsic properties.”
“We imagine that you could one day deploy an array of these panels, and the footprint is very small because they are all vertical,” says Zhao, who has plans to further test the panels in many resource-limited regions. “Then you could have many panels together, collecting water all the time, at household scale.”
This work was supported, in part, by the MIT J-WAFS Water and Food Seed Grant, the MIT-Chinese University of Hong Kong collaborative research program, and the UM6P-MIT collaborative research program.
A close-up of a new origami-inspired hydrogel material, designed by MIT engineers, that swells to absorb water from the air. When water condenses out of the material to be collected, the individual hydrogel spheres shrink back down to capture more moisture.
The human brain is very good at solving complicated problems. One reason for that is that humans can break problems apart into manageable subtasks that are easy to solve one at a time.
This allows us to complete a daily task like going out for coffee by breaking it into steps: getting out of our office building, navigating to the coffee shop, and once there, obtaining the coffee. This strategy helps us to handle obstacles easily. For example, if the elevator is broken, we can revise how we get out of the building without changing the other steps.
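The broken-elevator example can be caricatured as a hierarchical plan in which each subtask has its own small solvers, so a failure is repaired locally without replanning the other steps. Everything below (the `plan` function and the toy solvers) is invented for illustration.

```python
def plan(subtasks, solvers):
    """Hierarchical toy planner: solve each subtask independently; if one
    option fails (returns None), fall back to an alternative for that
    subtask only, leaving the other steps untouched."""
    steps = []
    for task in subtasks:
        for solver in solvers[task]:       # try options in preference order
            result = solver()
            if result is not None:
                steps.append((task, result))
                break
        else:
            raise RuntimeError(f"no way to complete {task!r}")
    return steps

# the coffee errand from the article, with the elevator out of service
solvers = {
    "exit building": [lambda: None,            # elevator: broken, fails
                      lambda: "take stairs"],  # fallback for this step only
    "reach cafe":    [lambda: "walk two blocks"],
    "get coffee":    [lambda: "order espresso"],
}
print(plan(["exit building", "reach cafe", "get coffee"], solvers))
```

Swapping the broken elevator for the stairs changes only the first subtask's solution; "reach cafe" and "get coffee" are untouched, which is the hallmark of the hierarchical decomposition the study examines.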
While there is a great deal of behavioral evidence demonstrating humans’ skill at these complicated tasks, it has been difficult to devise experimental scenarios that allow precise characterization of the computational strategies we use to solve problems.
In a new study, MIT researchers have successfully modeled how people deploy different decision-making strategies to solve a complicated task — in this case, predicting how a ball will travel through a maze when the ball is hidden from view. The human brain cannot perform this task perfectly because it is impossible to track all of the possible trajectories in parallel, but the researchers found that people can perform reasonably well by flexibly adopting two strategies known as hierarchical reasoning and counterfactual reasoning.
The researchers were also able to determine the circumstances under which people choose each of those strategies.
“What humans are capable of doing is to break down the maze into subsections, and then solve each step using relatively simple algorithms. Effectively, when we don’t have the means to solve a complex problem, we manage by using simpler heuristics that get the job done,” says Mehrdad Jazayeri, a professor of brain and cognitive sciences, a member of MIT’s McGovern Institute for Brain Research, an investigator at the Howard Hughes Medical Institute, and the senior author of the study.
Mahdi Ramadan PhD ’24 and graduate student Cheng Tang are the lead authors of the paper, which appears today in Nature Human Behaviour. Nicholas Watters PhD ’25 is also a co-author.
Rational strategies
When humans perform simple tasks that have a clear correct answer, such as categorizing objects, they perform extremely well. When tasks become more complex, such as planning a trip to your favorite cafe, there may no longer be one clearly superior answer. And, at each step, there are many things that could go wrong. In these cases, humans are very good at working out a solution that will get the task done, even though it may not be the optimal solution.
Those solutions often involve problem-solving shortcuts, or heuristics. Two prominent heuristics humans commonly rely on are hierarchical and counterfactual reasoning. Hierarchical reasoning is the process of breaking down a problem into layers, starting from the general and proceeding toward specifics. Counterfactual reasoning involves imagining what would have happened if you had made a different choice. While these strategies are well-known, scientists don’t know much about how the brain decides which one to use in a given situation.
“This is really a big question in cognitive science: How do we problem-solve in a suboptimal way, by coming up with clever heuristics that we chain together in a way that ends up getting us closer and closer until we solve the problem?” Jazayeri says.
To overcome this, Jazayeri and his colleagues devised a task that is just complex enough to require these strategies, yet simple enough that the outcomes and the calculations that go into them can be measured.
The task requires participants to predict the path of a ball as it moves through four possible trajectories in a maze. Once the ball enters the maze, people cannot see which path it travels. At two junctions in the maze, they hear an auditory cue when the ball reaches that point. Predicting the ball’s path is a task that is impossible for humans to solve with perfect accuracy.
“It requires four parallel simulations in your mind, and no human can do that. It’s analogous to having four conversations at a time,” Jazayeri says. “The task allows us to tap into this set of algorithms that the humans use, because you just can’t solve it optimally.”
The researchers recruited about 150 human volunteers to participate in the study. Before each subject began the ball-tracking task, the researchers evaluated how accurately they could estimate timespans of several hundred milliseconds, about the length of time it takes the ball to travel along one arm of the maze.
For each participant, the researchers created computational models that could predict the patterns of errors that would be seen for that participant (based on their timing skill) if they were running parallel simulations, using hierarchical reasoning alone, counterfactual reasoning alone, or combinations of the two reasoning strategies.
The researchers compared the subjects’ performance with the models’ predictions and found that for every subject, their performance was most closely associated with a model that used hierarchical reasoning but sometimes switched to counterfactual reasoning.
That suggests that instead of tracking all the possible paths that the ball could take, people broke up the task. First, they picked the direction (left or right) in which they thought the ball turned at the first junction, and continued to track the ball as it headed for the next turn. If the timing of the next sound they heard wasn’t compatible with the path they had chosen, they would go back and revise their first prediction — but only some of the time.
Switching back to the other side, which represents a shift to counterfactual reasoning, requires people to review their memory of the tones that they heard. However, it turns out that these memories are not always reliable, and the researchers found that people decided whether to go back or not based on how good they believed their memory to be.
“People rely on counterfactuals to the degree that it’s helpful,” Jazayeri says. “People who take a big performance loss when they do counterfactuals avoid doing them. But if you are someone who’s really good at retrieving information from the recent past, you may go back to the other side.”
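The strategy described above — commit hierarchically to one branch, and fall back on counterfactual revision only when memory of the tones seems reliable enough — can be sketched in code. This is a minimal illustrative simulation, not the authors' published model; the arm times, noise levels, and recall parameter are all hypothetical.

```python
import random

# Illustrative simulation of the maze task described in the article (not the
# authors' actual model). A ball takes one of four paths (two binary turns);
# the listener hears a tone at each junction and must infer the path from
# noisy interval timing. All parameters here are hypothetical.

def simulate_trial(timing_noise=0.15, recall_quality=0.7):
    true_path = (random.choice([0, 1]), random.choice([0, 1]))
    # Arm lengths differ by branch, so the inter-tone interval carries
    # information about which turn the ball took (hypothetical values).
    arm_time = {0: 1.0, 1: 1.4}
    heard = arm_time[true_path[0]] + random.gauss(0, timing_noise)

    # Hierarchical step: commit to the first turn that best explains the tone.
    guess_first = min((0, 1), key=lambda b: abs(heard - arm_time[b]))

    # Counterfactual step: if the committed branch fits the tone poorly,
    # revisit the memory of it -- but only when recall is judged good enough.
    misfit = abs(heard - arm_time[guess_first])
    if misfit > 0.5 * timing_noise and random.random() < recall_quality:
        recalled = heard + random.gauss(0, timing_noise * (1 - recall_quality))
        guess_first = min((0, 1), key=lambda b: abs(recalled - arm_time[b]))

    guess_second = random.choice([0, 1])  # second turn, guessed analogously
    return (guess_first, guess_second) == true_path

accuracy = sum(simulate_trial() for _ in range(10_000)) / 10_000
print(f"accuracy: {accuracy:.2f}")  # well above the 0.25 chance level of 4 paths
```

Even with only the second turn left to chance, the hierarchical commitment on the first turn lifts performance well above chance — mirroring the article's point that simple chained heuristics "get the job done" without tracking all four trajectories in parallel.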
Human limitations
To further validate their results, the researchers created a machine-learning neural network and trained it to complete the task. A machine-learning model trained on this task will track the ball’s path accurately and make the correct prediction every time, unless the researchers impose limitations on its performance.
When the researchers added cognitive limitations similar to those faced by humans, they found that the model altered its strategies. When they eliminated the model’s ability to follow all possible trajectories, it began to employ hierarchical and counterfactual strategies like humans do. If the researchers reduced the model’s memory recall ability, it began to switch to counterfactual only if it thought its recall would be good enough to get the right answer — just as humans do.
“What we found is that networks mimic human behavior when we impose on them those computational constraints that we found in human behavior,” Jazayeri says. “This is really saying that humans are acting rationally under the constraints that they have to function under.”
By slightly varying the amount of memory impairment programmed into the models, the researchers also saw hints that the switching of strategies appears to happen gradually, rather than at a distinct cut-off point. They are now performing further studies to try to determine what is happening in the brain as these shifts in strategy occur.
The research was funded by a Lisa K. Yang ICoN Fellowship, a Friends of the McGovern Institute Student Fellowship, a National Science Foundation Graduate Research Fellowship, the Simons Foundation, the Howard Hughes Medical Institute, and the McGovern Institute.
In a new study, MIT researchers have successfully modeled how people deploy different decision-making strategies to solve a complicated task — offering insights for building machines that think more like us.
Smartphone apps that track menstrual cycles are a ‘gold mine’ for consumer profiling, collecting information on everything from exercise, diet and medication to sexual preferences, hormone levels and contraception use.
This is according to a new report from the University of Cambridge’s Minderoo Centre for Technology and Democracy, which argues that the financial worth of this data is ‘vastly underestimated’ by users who supply profit-driven companies with highly intimate details.
The report’s authors caution that cycle tracking app (CTA) data in the wrong hands could result in risks to job prospects, workplace monitoring, health insurance discrimination and cyberstalking – and limit access to abortion.
They call for better governance of the booming ‘femtech’ industry to protect users when their data is sold at scale, arguing that apps must provide clear consent options rather than all-or-nothing data collection, and urge public health bodies to launch alternatives to commercial CTAs.
“Menstrual cycle tracking apps are presented as empowering women and addressing the gender health gap,” said Dr Stefanie Felsberger, lead author of the report from Cambridge’s Minderoo Centre. “Yet the business model behind their services rests on commercial use, selling user data and insights to third parties for profit.”
“There are real and frightening privacy and safety risks to women as a result of the commodification of the data collected by cycle tracking app companies.”
As most cycle tracking apps are targeted at women aiming to get pregnant, the download data alone is of huge commercial value, say researchers, as – other than home buying – no life event is linked to such dramatic shifts in consumer behaviour.
In fact, data on pregnancy is believed to be over two hundred times more valuable than data on age, gender or location for targeted advertising. The report points out that period tracking could also be used to target women at different points in their cycle. For example, the oestrogen or ‘mating’ phase could see an increase in cosmetics adverts.
The three most popular apps alone had estimated global downloads of a quarter of a billion in 2024. So-called femtech – digital products focused on women’s health and wellbeing – is estimated to reach over US$60 billion (£45 billion) by 2027, with cycle tracking apps making up half of this market.
With such intense demand for period tracking, the report argues that the UK’s National Health Service (NHS) should develop its own transparent and trustworthy app to rival those from private companies, with apps allowing permission for data to be used in valid medical research.
“The UK is ideally positioned to solve the question of access to menstrual data for researchers, as well as privacy and data commodification concerns, by developing an NHS app to track menstrual cycles,” said Felsberger, who points out that Planned Parenthood in the US already has its own app, but the UK lacks an equivalent.
“Apps that are situated within public healthcare systems, and not driven primarily by profit, will mitigate privacy violations, provide much-needed data on reproductive health, and give people more agency over how their menstrual data is used.”
“The use of cycle tracking apps is at an all-time high,” said Prof Gina Neff, Executive Director of Cambridge’s Minderoo Centre. “Women deserve better than to have their menstrual tracking data treated as consumer data, but there is a different possible future.”
“Researchers could use this data to help answer questions about women’s health. Care providers could use this data for important information about their patients’ health. Women could get meaningful insights that they are searching for,” Neff said.
In the UK and EU, period tracking data is considered ‘special category’, as with data on genetics or ethnicity, and has more legal safeguarding. However, the report highlights how in the UK, apps designed for women’s health have been used to charge women for illegally accessing abortion services.
In the US, data about menstrual cycles has been collected by officials in an attempt to undermine abortion access. Despite this, data from CTAs are regulated simply as ‘general wellness’ and granted no special protections.
“Menstrual tracking data is being used to control people’s reproductive lives,” said Felsberger. “It should not be left in the hands of private companies.”
Investigations by media, non-profit, and consumer groups have revealed CTAs sharing data with third parties ranging from advertisers and data brokers to tech giants such as Facebook and Google.
The report cites work published earlier this year from Privacy International showing that, while the major CTA companies have updated their approach to data sharing, device information is still collected in the UK and US with “no meaningful consent”.
Despite data protection improvements, the report suggests that user information is still shared with third parties such as cloud-based delivery networks that move the data around, and outside developers contracted to handle app functionalities.
At the very least, commercial apps could include delete buttons, says Felsberger, allowing users to erase data in the app as well as the company servers, helping protect against situations – from legal to medical – where data could be used against them.
“Menstrual tracking in the US should be classed as medical data,” said Felsberger. “In the UK and EU, where this data is already afforded special category status, more focus needs to be placed on enforcing existing regulation.”
The report stresses the need to improve public awareness and digital literacy around period tracking. The researchers argue that schools should educate students on medical data apps and privacy, so young people are less vulnerable to health hoaxes.
The report ‘The High Stakes of Tracking Menstruation’ is authored by Dr Stefanie Felsberger with a foreword by Professor Gina Neff and published by the Minderoo Centre for Technology and Democracy (MCTD).
Cambridge researchers urge public health bodies like the NHS to provide trustworthy, research-driven alternatives to platforms driven by profit.
Women deserve better than to have their menstrual tracking data treated as consumer data
The University of Melbourne will be better able to track, protect and enhance the rich biological diversity of its campuses following completion of a long-running Biodiversity Baseline Data Project.
For many patients with schizophrenia, other psychiatric illnesses, or diseases such as hypertension and asthma, it can be difficult to take their medicine every day. To help overcome that challenge, MIT researchers have developed a pill that can be taken just once a week and gradually releases medication from within the stomach.
In a phase 3 clinical trial conducted by MIT spinout Lyndra Therapeutics, the researchers used the once-a-week pill to deliver a widely used medication for managing the symptoms of schizophrenia. They found that this treatment regimen maintained consistent levels of the drug in patients’ bodies and controlled their symptoms just as well as daily doses of the drug. The results are published today in Lancet Psychiatry.
“We’ve converted something that has to be taken once a day to once a week, orally, using a technology that can be adapted for a variety of medications,” says Giovanni Traverso, an associate professor of mechanical engineering at MIT, a gastroenterologist at Brigham and Women’s Hospital, an associate member of the Broad Institute, and an author of the study. “The ability to provide a sustained level of drug for a prolonged period, in an easy-to-administer system, makes it easier to ensure patients are receiving their medication.”
Traverso’s lab began developing the ingestible capsule studied in this trial more than 10 years ago, as part of an ongoing effort to make medications easier for patients to take. The capsule is about the size of a multivitamin, and once swallowed, it expands into a star shape that helps it remain in the stomach until all of the drug is released.
Richard Scranton, chief medical officer of Lyndra Therapeutics, is the senior author of the paper, and Leslie Citrome, a clinical professor of psychiatry and behavioral sciences at New York Medical College School of Medicine, is the lead author. Nayana Nagaraj, medical director at Lyndra Therapeutics, and Todd Dumas, senior director of pharmacometrics at Certara, are also authors.
Sustained delivery
Over the past decade, Traverso’s lab has been working on a variety of capsules that can be swallowed and remain in the digestive tract for days or weeks, slowly releasing their drug payload. In 2016, his team reported the star-shaped device, which was then further developed by Lyndra for clinical trials in patients with schizophrenia.
The device contains six arms that can be folded in, allowing it to fit inside a capsule. The capsule dissolves when the device reaches the stomach, allowing the arms to spring out. Once the arms are extended, the device becomes too large to pass through the pylorus (the exit of the stomach), so it remains freely floating in the stomach as drugs are slowly released from the arms. After about a week, the arms break off on their own, and each segment exits the stomach and passes through the digestive tract.
For the clinical trials, the capsule was loaded with risperidone, a commonly prescribed medication used to treat schizophrenia. Most patients take the drug orally once a day. There are also injectable versions that can be given every two weeks, every month, or every two months, but they require administration by a health care provider and are not always acceptable to patients.
The MIT and Lyndra team chose to focus on schizophrenia in hopes that a drug regimen that could be administered less frequently, through oral delivery, could make treatment easier for patients and their caregivers.
“One of the areas of unmet need that was recognized early on is neuropsychiatric conditions, where the illness can limit or impair one’s ability to remember to take their medication,” Traverso says. “With that in mind, one of the conditions that has been a big focus has been schizophrenia.”
The phase 3 trial was coordinated by researchers at Lyndra and enrolled 83 patients at five different sites around the United States. Forty-five of those patients completed the full five weeks of the study, in which they took one risperidone-loaded capsule per week.
Throughout the study, the researchers measured the amount of drug in each patient’s bloodstream. Each week, they found a sharp increase on the day the pill was given, followed by a slow decline over the next week. The levels were all within the optimal range, and there was less variation over time than is seen when patients take a pill each day.
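The weekly profile described here — a sharp rise on dosing day followed by a slow decline — is the classic shape of first-order absorption paired with slower elimination. The sketch below is a generic Bateman-style model with hypothetical rate constants, not Lyndra's published pharmacokinetic data.

```python
import math

# Illustrative sketch of the weekly profile described in the article: a sharp
# rise after the capsule is taken, then a slow decline over the week. This is
# a generic first-order absorption/elimination (Bateman) model with
# hypothetical rate constants, not Lyndra's published pharmacokinetics.

DOSE = 100.0
KA = math.log(2) / 12.0   # release from the capsule (12 h half-life, assumed)
KE = math.log(2) / 60.0   # elimination (60 h half-life, assumed)

def level(t_hours):
    """Drug level t hours after swallowing one weekly capsule."""
    return DOSE * KA / (KA - KE) * (math.exp(-KE * t_hours) - math.exp(-KA * t_hours))

# The level peaks early in the week, then declines slowly until the next dose.
peak_day = max(range(1, 8), key=lambda d: level(d * 24))
print("peak on day", peak_day)
print(f"day 7 level: {level(7 * 24):.1f}")
```

Because elimination is much slower than release in this sketch, the trough at day 7 stays a substantial fraction of the peak — the qualitative behavior behind the article's observation that weekly dosing showed less variation than daily pills.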
Effective treatment
Using an evaluation known as the Positive and Negative Syndrome Scale (PANSS), the researchers also found that the patients’ symptoms remained stable throughout the study.
“One of the biggest obstacles in the care of people with chronic illnesses in general is that medications are not taken consistently. This leads to worsening symptoms, and in the case of schizophrenia, potential relapse and hospitalization,” Citrome says. “Having the option to take medication by mouth once a week represents an important option that can assist with adherence for the many patients who would prefer oral medications versus injectable formulations.”
Side effects from the treatment were minimal, the researchers found. Some patients experienced mild acid reflux and constipation early in the study, but these did not last long. The results, showing effectiveness of the capsule and few side effects, represent a major milestone in this approach to drug delivery, Traverso says.
“This really demonstrates that what we had hypothesized a decade ago, which is that a single capsule providing a drug depot within the GI tract could be possible,” he says. “Here what you see is that the capsule can achieve the drug levels that were predicted, and also control symptoms in a sizeable cohort of patients with schizophrenia.”
The investigators now hope to complete larger phase 3 studies before applying for FDA approval of this delivery approach for risperidone. They are also preparing for phase 1 trials using this capsule to deliver other drugs, including contraceptives.
“We are delighted that this technology which started at MIT has reached the point of phase 3 clinical trials,” says Robert Langer, the David H. Koch Institute Professor at MIT, who was an author of the original study on the star capsule and is a co-founder of Lyndra Therapeutics.
The ingestible capsule is about the size of a multivitamin, and once swallowed, it expands into a star shape that helps it remain in the stomach until all of the drug is released.
Surgeon, best-selling author, and public health leader Atul Gawande delivers the Alumni Day keynote address. Niles Singer/Harvard Staff Photographer
Campus & Community
‘Who we are and what we stand for’
Abbie Barrett
Harvard Correspondent
June 10, 2025
Amid Harvard Alumni Day celebration, speakers address challenges, share messages of strength and resolve
A collection of features and profiles covering Harvard University’s 374th Commencement.
Thousands of alumni from around the world gathered on campus Friday for Harvard Alumni Day — an annual event celebrating alumni of all Harvard Schools and class years and the collective strength of their communities. The day’s events, which coincided with Harvard and Radcliffe College reunions and other alumni programs across the University, drew a record 9,600-plus attendees this year. The festivities included musical performances, the presentation of the Harvard Medals, and a keynote address by renowned surgeon, best-selling author, and public health leader Atul Gawande, M.D. ’95, M.P.H. ’99.
The main program began with the traditional alumni parade from the Old Yard to Tercentenary Theatre, led by the chief marshal of alumni Dara Olmsted Silverstein ’00 and the two oldest alumni in attendance, Linda Cabot Black ’51 and Stanley Karson ’48, A.M. ’50.
Alumni file from the Old Yard to Tercentenary Theatre.
Veasey Conway/Harvard Staff Photographer
Stanley Karson ’48, A.M. ’50.
Veasey Conway/Harvard Staff Photographer
President Alan Garber greets Linda Cabot Black ’51.
Niles Singer/Harvard Staff Photographer
After Peter J. Koutoujian, M.P.A. ’03, the sheriff of Middlesex County, called the 155th annual meeting of the Harvard Alumni Association to order, HAA board president Moitri Chowdhury Savard ’93 took to the podium, referencing an appeal she made to the College Class of 2024 last spring: to consider the plural of the University’s motto, Veritas, and embrace Veritates — the ability to hold many truths simultaneously in order to connect across differences.
“Today I am even more convinced that we must strengthen this muscle to hold multiple truths and to coalesce around our many shared values, particularly freedom of thought and expression, and respect and kindness,” said Savard, who will be succeeded by incoming HAA President William Makris, Ed.M. ’00, on July 1.
Noting the unprecedented challenges the University has faced over the past year, she urged fellow alumni to continue “to be informed, principled ambassadors” of Harvard and higher education more broadly.
Sarah Karmon, executive director of the HAA and associate vice president of alumni affairs and development, spoke next, expressing her gratitude for the steadfast support and contributions of Harvard’s alumni volunteers. She also gave special thanks to those who led reunion planning and fundraising efforts for their classes this year, noting the Class of 2005’s record-setting attendance for a 20th reunion.
Karmon closed by paying tribute to Jack Reardon ’60, associate vice president of University relations, who will retire at the end of the month after more than 60 years of service to Harvard. “Every person in this theater today has benefited from his leadership, his wisdom, and his deep commitment to his alma mater,” she said.
President Alan M. Garber ’76, Ph.D. ’82, who was met with a standing ovation, spoke to the challenges of a difficult year, laying out how the University is working to address legitimate criticisms while defending itself against misrepresentations and retaliatory actions from the federal government.
“Only one thing about Harvard has persisted over 388 years, and actually it’s not our name; it’s our embrace of scrutiny, advancement, and renewal,” said Garber, noting that the University is built on the idea of continual improvement to create a better institution and world for successive generations. “The pursuit of truth — of Veritas — is perpetual,” Garber said. “We are unceasing in our efforts to champion our motto.”
He also remarked on the expressions of support the University has received from alumni, as well as from people with no affiliation to Harvard who have championed the University in its fight to preserve academic freedom.
Garber ended his speech with a short valediction: “May Veritas lift us up and light our way, especially in dark times, enabling Harvard and our fellow universities to persevere and succeed in building a better future — not perfect, but more perfect than the present.”
Danilo “Dacha” Thurber ’25 and Sava Thurber ’27.
Niles Singer/Harvard Staff Photographer
Members of the 50th reunion committee.
Veasey Conway/Harvard Staff Photographer
Peter J. Koutoujian, M.P.A. ’03.
Veasey Conway/Harvard Staff Photographer
Chief marshal of alumni Dara Olmsted Silverstein ’00.
Niles Singer/Harvard Staff Photographer
Outgoing HAA board president Moitri Chowdhury Savard ’93.
Niles Singer/Harvard Staff Photographer
Paul J. Finnegan ’75, M.B.A. ’82.
Veasey Conway/Harvard Staff Photographer
Carolyn Hughes ’54.
Niles Singer/Harvard Staff Photographer
Kathy Delaney-Smith.
Veasey Conway/Harvard Staff Photographer
David Johnston ’63.
Veasey Conway/Harvard Staff Photographer
Sarah Karmon, HAA executive director and associate vice president of alumni affairs and development.
Niles Singer/Harvard Staff Photographer
Following Garber’s speech, brothers Danilo “Dacha” Thurber ’25 and Sava Thurber ’27 performed two songs — a traditional Polish folksong called “Tesknota Za Ojczyzna Marsz” and “Etudes-Caprices Op. 18, No. 4” by Polish composer Henryk Wieniawski — which they noted “highlight the importance of an international voice in a place which we are so fortunate to call home.”
Garber then presented this year’s Harvard Medals to Kathy Delaney-Smith, Paul J. Finnegan ’75, M.B.A. ’82, Carolyn Hughes ’54, and David Johnston ’63, who were recognized for their extraordinary service to the University.
In his keynote address, Gawande, who served as assistant administrator for global health at USAID from 2022 to early 2025, called out recent federal actions for undermining public health and harming Harvard and the country.
The University is facing existential questions, said Gawande, a general and endocrine surgeon at Brigham and Women’s Hospital and a professor at Harvard Medical School and Harvard T.H. Chan School of Public Health. He learned just in the previous week that funding had been cut for his own research center’s efforts to reduce surgical patient mortality.
“The discussions have been hard, but the answer was ultimately easy,” he said, expressing his gratitude to Garber and the Corporation for standing strong against demands that threaten the foundation of teaching, scholarship, and discovery. Navigating an uncertain future, he said, “is far easier when we know who we are and what we stand for.”
The main program ended with a performance of “Fair Harvard” by alumni members of the Harvard Din & Tonics, Harvard Glee Club, Harvard-Radcliffe Collegium Musicum, Harvard University Choir, Kuumba Singers of Harvard College, Radcliffe Choral Society, and Radcliffe Pitches. Savard told those in attendance to save the date for next year’s Harvard Alumni Day — June 5, 2026 — before the crowd dispersed to celebrate in the Yard with lawn games, photo opportunities, and food and beverage trucks.
Sixteen Harvard Clubs around the world also hosted local celebrations of Harvard Alumni Day for those who could not attend in person. Later in the afternoon, many Shared Interest Groups hosted meetup events on campus and in Cambridge, including a get-together at Charlie’s Kitchen hosted by Harvardwood. Alumni also had the opportunity to attend several Harvard Alumni Day symposia sessions, which included faculty panels on Harvard’s global impact, the ongoing work of the Salata Institute for Climate and Sustainability, and the Harvard Data Science Initiative’s efforts to ensure AI serves society in meaningful and ethical ways.
As the frequency and severity of extreme weather events grow, it may become increasingly necessary to employ a bolder approach to climate change, warned Emily A. Carter, the Gerhard R. Andlinger Professor in Energy and the Environment at Princeton University. Carter made her case for why the energy transition is no longer enough in the face of climate change while speaking at the MIT Energy Initiative (MITEI) Presents: Advancing the Energy Transition seminar on the MIT campus.
“If all we do is take care of what we did in the past — but we don’t change what we do in the future — then we’re still going to be left with very serious problems,” she said. Our approach to climate change mitigation must comprise transformation, intervention, and adaptation strategies, said Carter.
Transitioning to a decarbonized electricity system is one piece of the puzzle. Growing amounts of solar and wind energy — along with nuclear, hydropower, and geothermal — are slowly transforming the electricity landscape, but Carter noted that there are new technologies farther down the pipeline.
“Advanced geothermal may come on in the next couple of decades. Fusion will only really start to play a role later in the century, but could provide firm electricity such that we can start to decommission nuclear,” said Carter, who is also a senior strategic advisor and associate laboratory director at the Department of Energy’s Princeton Plasma Physics Laboratory.
Taking this a step further, Carter outlined how this carbon-free electricity should then be used to electrify everything we can. She highlighted the industrial sector as a critical area for transformation: “The energy transition is about transitioning off of fossil fuels. If you look at the manufacturing industries, they are driven by fossil fuels right now. They are driven by fossil fuel-driven thermal processes.” Carter noted that thermal energy is much less efficient than electricity and highlighted electricity-driven strategies that could replace heat in manufacturing, such as electrolysis, plasmas, light-emitting diodes (LEDs) for photocatalysis, and joule heating.
The transportation sector is also a key area for electrification, Carter said. While electric vehicles have become increasingly common in recent years, heavy-duty transportation is not as easily electrified. The solution? “Carbon-neutral fuels for heavy-duty aviation and shipping,” she said, emphasizing that these fuels will need to become part of the circular economy. “We know that when we burn those fuels, they’re going to produce CO2 [carbon dioxide] again. They need to come from a source of CO2 that is not fossil-based.”
The next step is intervention in the form of carbon dioxide removal, which then necessitates methods of storage and utilization, according to Carter. “There’s a lot of talk about building large numbers of pipelines to capture the CO2 — from fossil fuel-driven power plants, cement plants, steel plants, all sorts of industrial places that emit CO2 — and then piping it and storing it in underground aquifers,” she explained. Offshore pipelines are much more expensive than those on land, but can mitigate public concerns over their safety. Europe is focusing its efforts exclusively offshore for this very reason, and the same could be true for the United States, Carter said.
Once carbon dioxide is captured, commercial utilization may provide economic leverage to accelerate sequestration, even if only a few gigatons are used per year, Carter noted. Through mineralization, CO2 can be converted into carbonates, which could be used in building materials such as concrete and road-paving materials.
There is another form of intervention that Carter currently views as a last resort: solar geoengineering, sometimes known as solar radiation management or SRM. In 1991, Mount Pinatubo in the Philippines erupted and released sulfur dioxide into the stratosphere, which caused a temporary cooling of the Earth by approximately 0.5 degree Celsius for over a year. SRM seeks to recreate that cooling effect by injecting particles into the atmosphere that reflect sunlight. According to Carter, there are three main strategies: stratospheric aerosol injection, cirrus cloud thinning (thinning clouds to let more infrared radiation emitted by the earth escape to space), and marine cloud brightening (brightening clouds with sea salt so they reflect more light).
“My view is, I hope we don't ever have to do it, but I sure think we should understand what would happen in case somebody else just decides to do it. It’s a global security issue,” said Carter. “In principle, it’s not so difficult technologically, so we’d like to really understand and to be able to predict what would happen if that happened.”
With any technology, stakeholder and community engagement is essential for deployment, Carter said. She emphasized the importance of both respectfully listening to concerns and thoroughly addressing them, stating, “Hopefully, there’s enough information given to assuage their fears. We have to gain the trust of people before any deployment can be considered.”
A crucial component of this trust starts with the responsibility of the scientific community to be transparent and critique each other’s work, Carter said. “Skepticism is good. You should have to prove your proof of principle.”
MITEI Presents: Advancing the Energy Transition is an MIT Energy Initiative speaker series highlighting energy experts and leaders at the forefront of the scientific, technological, and policy solutions needed to transform our energy systems. The series will continue in fall 2025. For more information on this and additional events, visit the MITEI website.
Emily Carter (right), the Gerhard R. Andlinger Professor in Energy and the Environment at Princeton University, explained how climate change mitigation must include transformation, intervention, and adaptation strategies. William Green, director of the MIT Energy Initiative, moderated the discussion.
Travel agents help to provide end-to-end logistics — like transportation, accommodations, meals, and lodging — for businesspeople, vacationers, and everyone in between. For those looking to make their own arrangements, large language models (LLMs) seem like they would be a strong tool to employ for this task because of their ability to iteratively interact using natural language, provide some commonsense reasoning, collect information, and call other tools in to help with the task at hand. However, recent work has found that state-of-the-art LLMs struggle with complex logistical and mathematical reasoning, as well as problems with multiple constraints, like trip planning, where they’ve been found to provide viable solutions 4 percent or less of the time, even with additional tools and application programming interfaces (APIs).
Subsequently, a research team from MIT and the MIT-IBM Watson AI Lab reframed the issue to see if they could increase the success rate of LLM solutions for complex problems. “We believe a lot of these planning problems are naturally a combinatorial optimization problem,” where you need to satisfy several constraints in a certifiable way, says Chuchu Fan, associate professor in the MIT Department of Aeronautics and Astronautics (AeroAstro) and the Laboratory for Information and Decision Systems (LIDS). She is also a researcher in the MIT-IBM Watson AI Lab. Her team applies machine learning, control theory, and formal methods to develop safe and verifiable control systems for robotics, autonomous systems, controllers, and human-machine interactions.
Noting the transferable nature of their work for travel planning, the group sought to create a user-friendly framework that can act as an AI travel broker to help develop realistic, logical, and complete travel plans. To achieve this, the researchers combined common LLMs with algorithms and a complete satisfiability solver. Solvers are mathematical tools that rigorously check if criteria can be met and how, but they require complex computer programming for use. This makes them natural companions to LLMs for problems like these, where users want help planning in a timely manner, without the need for programming knowledge or research into travel options. Further, if a user’s constraint cannot be met, the new technique can identify and articulate where the issue lies and propose alternative measures to the user, who can then choose to accept, reject, or modify them until a valid plan is formulated, if one exists.
“Different complexities of travel planning are something everyone will have to deal with at some point. There are different needs, requirements, constraints, and real-world information that you can collect,” says Fan. “Our idea is not to ask LLMs to propose a travel plan. Instead, an LLM here is acting as a translator to translate this natural language description of the problem into a problem that a solver can handle [and then provide that to the user].”
Co-authoring a paper on the work with Fan are Yang Zhang of MIT-IBM Watson AI Lab, AeroAstro graduate student Yilun Hao, and graduate student Yongchao Chen of MIT LIDS and Harvard University. This work was recently presented at the Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics.
Breaking down the solver
Math tends to be domain-specific. For example, in natural language processing, LLMs perform regressions to predict the next token, a.k.a. “word,” in a series to analyze or create a document. This works well for generalizing diverse human inputs. LLMs alone, however, wouldn’t work for formal verification applications, like in aerospace or cybersecurity, where circuit connections and constraint tasks need to be complete and proven, otherwise loopholes and vulnerabilities can sneak by and cause critical safety issues. Here, solvers excel, but they need fixed formatting inputs and struggle with unsatisfiable queries. A hybrid technique, however, provides an opportunity to develop solutions for complex problems, like trip planning, in a way that’s intuitive for everyday people.
“The solver is really the key here, because when we develop these algorithms, we know exactly how the problem is being solved as an optimization problem,” says Fan. Specifically, the research group used a solver called satisfiability modulo theories (SMT), which determines whether a formula can be satisfied. “With this particular solver, it’s not just doing optimization. It’s doing reasoning over a lot of different algorithms there to understand whether the planning problem is possible or not to solve. That’s a pretty significant thing in travel planning. It’s not a very traditional mathematical optimization problem because people come up with all these limitations, constraints, restrictions,” notes Fan.
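To make the constraint-satisfaction framing concrete, here is a toy sketch in Python. A real SMT solver such as z3 reasons symbolically over the constraints; this brute-force search over a small hypothetical catalog (all cities and prices are made up for illustration) only shows the shape of the problem the solver is handed.

```python
from itertools import product

# Hypothetical data of the kind an API like CitySearch might return.
CITIES = {
    "Boston":   {"flight": 300, "hotel_per_night": 180},
    "Chicago":  {"flight": 250, "hotel_per_night": 150},
    "Honolulu": {"flight": 700, "hotel_per_night": 260},
}

def find_plans(budget, min_nights, max_nights):
    """Return every (city, nights, cost) triple that satisfies all constraints."""
    plans = []
    for city, nights in product(CITIES, range(min_nights, max_nights + 1)):
        cost = CITIES[city]["flight"] + nights * CITIES[city]["hotel_per_night"]
        if cost <= budget:  # the budget constraint; real queries stack many more
            plans.append((city, nights, cost))
    return plans

plans = find_plans(budget=1000, min_nights=2, max_nights=4)
```

Unlike this exhaustive enumeration, an SMT solver can also prove that no assignment satisfies the constraints, which is what makes the unsatisfiable case tractable.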
Translation in action
The “travel agent” works in four steps that can be repeated, as needed. The researchers used GPT-4, Claude-3, or Mistral-Large as the method’s LLM. First, the LLM parses a user’s requested travel plan prompt into planning steps, noting preferences for budget, hotels, transportation, destinations, attractions, restaurants, and trip duration in days, as well as any other user prescriptions. Those steps are then converted into executable Python code (with a natural language annotation for each of the constraints), which calls APIs like CitySearch, FlightSearch, etc. to collect data, and the SMT solver to begin executing the steps laid out in the constraint satisfaction problem. If a sound and complete solution can be found, the solver outputs the result to the LLM, which then provides a coherent itinerary to the user.
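The four-step loop above can be sketched as control flow. The real system combines an LLM, live travel APIs, and an SMT solver; in this sketch each stage is passed in as a stand-in function, so only the structure of the iteration is modeled, not any actual component.

```python
# Sketch of the repeatable four-step pipeline; parse/to_code/fetch/solve/render
# are caller-supplied stand-ins for the LLM, API, and solver stages.
def plan_trip(prompt, parse, to_code, fetch, solve, render, max_iters=3):
    for _ in range(max_iters):
        steps = parse(prompt)           # 1. LLM turns the request into planning steps
        code = to_code(steps)           # 2. steps become annotated constraint code
        data = fetch(code)              # 3. APIs (CitySearch, FlightSearch, ...) supply data
        ok, result = solve(code, data)  # 4. SMT solver checks satisfiability
        if ok:
            return render(result)       # LLM renders a coherent itinerary
        prompt = result                 # otherwise, conflicts feed the next round
    return None
```

With trivial stand-ins (e.g., a `solve` that always succeeds), the function returns a rendered plan on the first pass; with a `solve` that always fails, it gives up after `max_iters` rounds.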
If one or more constraints cannot be met, the framework begins looking for an alternative. The solver outputs code identifying the conflicting constraints (with its corresponding annotation) that the LLM then provides to the user with a potential remedy. The user can then decide how to proceed, until a solution (or the maximum number of iterations) is reached.
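The fallback step can be illustrated with a hypothetical sketch: when the full constraint set is unsatisfiable, drop one constraint at a time to report where the conflict lies. A real SMT solver returns an "unsat core" directly rather than using this trial-and-error, and the candidate trips here are made-up data.

```python
# Named constraints, each a predicate over a candidate trip (illustrative only).
CONSTRAINTS = {
    "budget <= 800":     lambda t: t["cost"] <= 800,
    "at least 5 nights": lambda t: t["nights"] >= 5,
    "direct flight":     lambda t: t["direct"],
}

# Made-up candidate trips: Denver busts the budget, Austin is too short.
CANDIDATES = [
    {"city": "Denver", "cost": 900, "nights": 5, "direct": True},
    {"city": "Austin", "cost": 750, "nights": 3, "direct": True},
]

def satisfiable(constraints):
    """True if some candidate meets every constraint in the set."""
    return any(all(check(t) for check in constraints.values()) for t in CANDIDATES)

def conflicting_constraints():
    """Names of constraints whose removal (alone) makes the problem solvable."""
    if satisfiable(CONSTRAINTS):
        return []
    return [name for name in CONSTRAINTS
            if satisfiable({k: v for k, v in CONSTRAINTS.items() if k != name})]

conflicts = conflicting_constraints()
```

In this toy case relaxing either the budget or the minimum duration yields a valid plan, so those two names would be surfaced to the user as the candidates to modify.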
Generalizable and robust planning
The researchers tested their method using the aforementioned LLMs against other baselines: GPT-4 by itself, OpenAI o1-preview by itself, GPT-4 with a tool to collect information, and a search algorithm that optimizes for total cost. Using the TravelPlanner dataset, which includes data for viable plans, the team looked at multiple performance metrics: how frequently a method could deliver a solution, if the solution satisfied commonsense criteria like not visiting two cities in one day, the method’s ability to meet one or more constraints, and a final pass rate indicating that it could meet all constraints. The new technique generally achieved over a 90 percent pass rate, compared to 10 percent or lower for the baselines. The team also explored the addition of a JSON representation within the query step, which further made it easier for the method to provide solutions with 84.4-98.9 percent pass rates.
The MIT-IBM team posed additional challenges for their method. They looked at how important each component of their solution was — such as removing human feedback or the solver — and how that affected plan adjustments to unsatisfiable queries within 10 or 20 iterations using a new dataset they created called UnsatChristmas, which includes unseen constraints, and a modified version of TravelPlanner. On average, the MIT-IBM group’s framework achieved 78.6 and 85 percent success, which rises to 81.6 and 91.7 percent with additional plan modification rounds. The researchers analyzed how well it handled new, unseen constraints and paraphrased query-step and step-code prompts. In both cases, it performed very well, especially with an 86.7 percent pass rate for the paraphrasing trial.
Lastly, the MIT-IBM researchers applied their framework to tasks in other domains: block picking, task allocation, the traveling salesman problem, and warehouse operation. Here, the method must select numbered, colored blocks to maximize a score; optimize robot task assignment for different scenarios; plan trips that minimize distance traveled; and complete and optimize robot tasks in a warehouse.
“I think this is a very strong and innovative framework that can save a lot of time for humans, and also, it’s a very novel combination of the LLM and the solver,” says Hao.
This work was funded, in part, by the Office of Naval Research and the MIT-IBM Watson AI Lab.
Traveling requires considerations for location, cost and availability of hotels, transportation, restaurants, and more. A new method from the MIT-IBM Watson AI Lab combines a large language model and a solver to assist with this frequently encountered problem.
Enrico Fermi. Photo illustration by Liz Zonarich/Harvard Staff
Still waiting
Sy Boles
Harvard Staff Writer
June 10, 2025
75 years after Fermi’s paradox, are we any closer to finding alien life?
It was a simple question asked over lunch in 1950. Enrico Fermi, the Nobel Prize-winning physicist who helped usher in the atomic age, was dining with colleagues at Los Alamos, New Mexico, when the conversation turned to extraterrestrial life. Given the vastness of the universe and the statistical likelihood of other intelligent civilizations, Fermi wondered, “Where is everybody?”
Seventy-five years later, David Charbonneau, a professor of astronomy at the Center for Astrophysics | Harvard & Smithsonian, says we’re closer to an answer.
When Fermi posed his famous paradox, Charbonneau said, we hadn’t identified a single planet beyond our solar system. The 1995 discovery of the first exoplanet allowed scientists to break the paradox into smaller, more solvable questions: How many stars are there? How many of those stars have planets? What fraction of those planets are Earth-like? What fraction of Earth-like planets support life? And finally, what fraction of that life is intelligent?
“We have made tremendous progress on those questions,” said Charbonneau, who co-chaired the National Academies of Sciences, Engineering, and Medicine’s 2018 Committee on Exoplanet Science Strategy. “We now know that one in every four stars, at least, has a planet that is the same size as the Earth and is rocky, and is the same temperature as the Earth, so it’s what we would call a habitable-zone planet. Those are very secure conclusions.”
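The decomposition Charbonneau describes multiplies together a chain of fractions, in the spirit of the Drake equation. The sketch below uses the "one in every four stars" figure from the quote; the galaxy's star count is a rough order of magnitude, and the life and intelligence fractions are pure placeholders for the unknowns discussed below, not measurements.

```python
# Back-of-the-envelope chain of fractions (illustrative placeholders marked).
stars_in_galaxy = 1e11            # rough Milky Way star count, order of magnitude
frac_habitable_earthlike = 0.25   # "one in every four stars, at least" (quoted above)
frac_develops_life = 1e-3         # unknown: could be near 1, or vanishingly small
frac_intelligent = 1e-2           # unknown placeholder

habitable_planets = stars_in_galaxy * frac_habitable_earthlike
planets_with_life = habitable_planets * frac_develops_life
intelligent_civilizations = planets_with_life * frac_intelligent
```

The point of the exercise is the structure, not the numbers: the first factor is now measured, while the last two remain uncertain across many orders of magnitude, which is why the answer could plausibly be "millions" or "effectively zero."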
The next step is identifying biosignatures — chemicals in a planet’s atmosphere that could only be there because of biological processes. Charbonneau says that the necessary evidence faces a major technological hurdle: It requires far more data than our current instruments can provide.
Recognizing that challenge, the National Academies’ Committee for a Decadal Survey on Astronomy and Astrophysics 2020, on which Charbonneau served as a panel member, recommended the development of the Habitable Worlds Observatory, a space telescope designed to hunt for chemical signs of life on other planets. The HWO, if it were built and launched, would image at least 25 potentially habitable worlds. The project remains tentative.
There’s still the question of just how common life, let alone intelligent life, really is. It’s possible, Charbonneau said, that if you take any habitable-zone planet, add water, oxygen, nitrogen, and phosphorus, and give it about a billion years, life will develop. Or you could have those very same conditions, and it would all remain stubbornly lifeless. Examining even the first habitable planet we find would give a much better idea of how common life is.
“If you look at the first one and there isn’t life, you’ve already learned, from a statistical perspective, that it’s not a guarantee that life forms. And then you have to think logarithmically. You have to think, maybe it’s one in 1,000 or maybe it’s one in a billion, or maybe it’s one in a trillion. And all those possibilities basically would mean there’s no life that we can interact with.”
Avi Loeb, Frank B. Baird Jr. Professor of Science at Harvard, says the search for extraterrestrial life should expand beyond traditional approaches. Loeb is the founder of the Galileo Project, which studies both unidentified aerial phenomena spotted here on Earth and physical objects that may have come from other solar systems.
The project is named for the Italian astronomer who was persecuted in the 17th century for arguing the Copernican theory that the Earth was not the center of the universe. Proof of billions of habitable planets in our galaxy alone is a reminder that we’re not as unique as we think we are, Loeb says. “The message from nature is, don’t be presumptuous, you are not privileged.”
Avi Loeb.
Harvard file photo
David Charbonneau.
Niles Singer/Harvard Staff Photographer
Loeb made headlines in 2018 when he suggested that ‘Oumuamua, the first known interstellar object to pass through our solar system, could be an alien lightsail or debris from an extraterrestrial ship. Despite pushback against the idea, Loeb says we shouldn’t brush anomalies under the carpet: We should at least get the data to find out for certain. He thinks that Fermi was doing himself a disservice by wondering idly about whether there were aliens, like someone who complains of being lonely but won’t try to meet new people.
“It’s the most romantic question on Earth,” Loeb said. “Do we have a partner out there?”
For Charbonneau, the chances of finding that partner are slim. Even under ideal circumstances — if our nearest interstellar neighbor, Proxima Centauri, hosted intelligent life with radio technology — sending a single message back and forth once would take the better part of a decade.
There’s also the chance that the aliens are less interested in us than we are in them.
“If you look around on the Earth, there are a lot of organisms, some would say intelligent organisms, that are not interested in developing technology, and they’re also maybe not interested in communicating,” Charbonneau said. “We humans love to communicate, and we love to connect, and maybe that’s just not a property of life: Maybe that’s really a property of humans.”
Research that crosses the traditional boundaries of academic disciplines, and boundaries between academia, industry, and government, is increasingly widespread, and has sometimes led to the spawning of significant new disciplines. But Munther Dahleh, a professor of electrical engineering and computer science at MIT, says that such multidisciplinary and interdisciplinary work often suffers from a number of shortcomings and handicaps compared to more traditionally focused disciplinary work.
But increasingly, he says, the profound challenges that face us in the modern world — including climate change, biodiversity loss, how to control and regulate artificial intelligence systems, and the identification and control of pandemics — require such meshing of expertise from very different areas, including engineering, policy, economics, and data analysis. That realization is what guided him, a decade ago, in the creation of MIT’s pioneering Institute for Data, Systems and Society (IDSS), aiming to foster a more deeply integrated and lasting set of collaborations than the usual temporary and ad hoc associations that occur for such work.
Dahleh has now written a book detailing the process of analyzing the landscape of existing disciplinary divisions at MIT and conceiving of a way to create a structure aimed at breaking down some of those barriers in a lasting and meaningful way, in order to bring about this new institute. The book, “Data, Systems, and Society: Harnessing AI for Societal Good,” was published this March by Cambridge University Press.
The book, Dahleh says, is his attempt “to describe our thinking that led us to the vision of the institute. What was the driving vision behind it?” It is aimed at a number of different audiences, he says, but in particular, “I’m targeting students who are coming to do research that they want to address societal challenges of different types, but utilizing AI and data science. How should they be thinking about these problems?”
A key concept that has guided the structure of the institute is something he refers to as “the triangle.” This refers to the interaction of three components: physical systems, people interacting with those physical systems, and then regulation and policy regarding those systems. Each of these affects, and is affected by, the others in various ways, he explains. “You get a complex interaction among these three components, and then there is data on all these pieces. Data is sort of like a circle that sits in the middle of this triangle and connects all these pieces,” he says.
When tackling any big, complex problem, he suggests, it is useful to think in terms of this triangle. “If you’re tackling a societal problem, it’s very important to understand the impact of your solution on society, on the people, and the role of people in the success of your system,” he says. Often, he says, “solutions and technology have actually marginalized certain groups of people and have ignored them. So the big message is always to think about the interaction between these components as you think about how to solve problems.”
As a specific example, he cites the Covid-19 pandemic. That was a perfect example of a big societal problem, he says, and illustrates the three sides of the triangle: there’s the biology, which was little understood at first and was subject to intensive research efforts; there was the contagion effect, having to do with social behavior and interactions among people; and there was the decision-making by political leaders and institutions, in terms of shutting down schools and companies or requiring masks, and so on. “The complex problem we faced was the interaction of all these components happening in real-time, when the data wasn’t all available,” he says.
Making a decision, for example shutting schools or businesses, based on controlling the spread of the disease, had immediate effects on economics and social well-being and health and education, “so we had to weigh all these things back into the formula,” he says. “The triangle came alive for us during the pandemic.” As a result, IDSS “became a convening place, partly because of all the different aspects of the problem that we were interested in.”
Examples of such interactions abound, he says. Social media and e-commerce platforms are another case of “systems built for people, and they have a regulation aspect, and they fit into the same story if you’re trying to understand misinformation or the monitoring of misinformation.”
The book presents many examples of ethical issues in AI, stressing that they must be handled with great care. He cites self-driving cars as an example, where programming decisions in dangerous situations can appear ethical but lead to negative economic and humanitarian outcomes. For instance, while most Americans support the idea that a car should sacrifice its driver rather than kill an innocent person, they wouldn’t buy such a car. This reluctance lowers adoption of the (on average safer) vehicles and ultimately increases casualties.
In the book, he explains the difference, as he sees it, between “transdisciplinary” research and typical cross-disciplinary or interdisciplinary research. “They all have different roles, and they have been successful in different ways,” he says. The key difference is that most such efforts tend to be transitory, which can limit their societal impact: even when people from different departments work together on projects, they lack a shared structure of journals, conferences, common spaces, infrastructure, and a sense of community. Creating an academic entity in the form of IDSS that explicitly crosses these boundaries in a fixed and lasting way was an attempt to address that lack. “It was primarily about creating a culture for people to think about all these components at the same time.”
He hastens to add that of course such interactions were already happening at MIT, “but we didn’t have one place where all the students are all interacting with all of these principles at the same time.” In the IDSS doctoral program, for instance, there are 12 required core courses — half of them from statistics and optimization theory and computation, and half from the social sciences and humanities.
Dahleh stepped down from the leadership of IDSS two years ago to return to teaching and to continue his research. But as he reflected on the work of that institute and his role in bringing it into being, he realized that unlike his own academic research, in which every step along the way is carefully documented in published papers, “I haven’t left a trail” to document the creation of the institute and the thinking behind it. “Nobody knows what we thought about, how we thought about it, how we built it.” Now, with this book, they do.
The book, he says, is “kind of leading people into how all of this came together, in hindsight. I want to have people read this and sort of understand it from a historical perspective, how something like this happened, and I did my best to make it as understandable and simple as I could.”
In his new book, Munther Dahleh explains the difference, as he sees it, between “transdisciplinary” research and typical cross-disciplinary or interdisciplinary research.
Health
What your brain score says about your body
Mass General Brigham Communications
June 10, 2025
4 min read
Simple tool can be used to identify risk factors for cancer and heart disease too, says new study
A “scorecard” designed to assess a person’s risk of developing brain-related conditions works similarly for heart disease and the three most common types of cancer, according to a new Mass General Brigham study published in Family Practice.
The McCance Brain Care Score, developed at Mass General Brigham, is a checklist designed to assess modifiable risk factors that influence brain health. The scorecard also serves as a practical framework to help individuals identify meaningful, achievable lifestyle changes that support brain — and possibly systemic — health. Previous studies showed that a higher score, indicating better brain care, is associated with a lower risk of stroke, dementia, and late-life depression.
“While the McCance Brain Care Score was originally developed to address modifiable risk factors for brain diseases, we have also found it’s associated with the incidence of cardiovascular disease and common cancers,” said senior author Sanjula Singh of the McCance Center for Brain Health at Massachusetts General Hospital and Harvard Medical School. “These findings reinforce the idea that brain disease, heart disease, and cancer share common risk factors and that by taking better care of your brain, you may also be supporting the health of your heart and body as a whole simultaneously.”
“These findings reinforce the idea that brain disease, heart disease, and cancer share common risk factors.”
Neurological diseases such as stroke, dementia, and late-life depression are often driven by a combination of modifiable risk factors. Similarly, cardiovascular diseases — including ischemic heart disease, stroke, and heart failure — and the three most common cancers worldwide (lung, colorectal, and breast cancer) share many of these risk factors. At least 80 percent of cardiovascular disease cases and 50 percent of cancer cases are attributable to modifiable behaviors such as poor nutrition, physical inactivity, smoking, excessive alcohol use, elevated blood pressure, cholesterol, and blood sugar, as well as psychosocial factors like stress and social isolation.
Given this overlap, researchers used data from the UK Biobank to analyze health outcomes in 416,370 individuals aged 40 to 69 years. They found that a 5-point higher Brain Care Score at baseline was associated with a 43 percent lower risk of developing cardiovascular disease over a median follow-up of 12½ years. For cancer, a 5-point increase in Brain Care Score was associated with a 31 percent lower incidence of lung, colorectal, and breast cancer.
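As a rough illustration of what these effect sizes mean (a sketch, not taken from the study itself — it simply reads “X percent lower risk” per 5-point score increase as a hazard ratio of 1 − X/100, the usual interpretation of such figures):

```python
# Rough arithmetic sketch (not from the study): reading "X percent lower risk"
# per 5-point Brain Care Score increase as a hazard ratio of 1 - X/100.
def hazard_ratio(percent_lower: float) -> float:
    """Convert a reported percent risk reduction to a hazard ratio."""
    return round(1.0 - percent_lower / 100.0, 2)

print(hazard_ratio(43))  # cardiovascular disease: 0.57 per 5 points
print(hazard_ratio(31))  # lung/colorectal/breast cancer: 0.69 per 5 points
```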
The authors acknowledged several limitations. First, while the findings reveal strong associations, the study does not establish causality — although prior evidence suggests that some individual components of the Brain Care Score, such as smoking, physical activity, and blood pressure control, have causal links to specific outcomes. Second, because the UK Biobank includes only participants aged 40 to 69 at enrollment, the findings may not generalize to younger or older populations. Lastly, while the score provides a broad, accessible measure of brain health, it is not designed as a disease-specific predictive model.
“The goal of the McCance Brain Care Score is to empower individuals to take small, meaningful steps toward better brain health,” said lead author Jasper Senff, who conducted this work as a postdoctoral fellow in the Singh Lab within the Brain Care Labs at Massachusetts General Hospital. “Taking better care of your brain by making progress on your Brain Care Score may also be linked to broader health benefits, including a lower likelihood of heart disease and cancer.”
“Primary care providers around the world are under growing pressure to manage complex health needs within limited time,” said Singh. “A simple, easy-to-use tool like the McCance Brain Care Score holds enormous promise — not only for supporting brain health, but also for helping to address modifiable risk factors for a broader range of chronic diseases in a practical, time-efficient way.”
Funding for this study was provided by the National Institutes of Health and American Heart Association.
Dustin Tingley. Photo by Grace DuVal
Science & Tech
Numbers tell one story about climate change. People tell another.
Alvin Powell
Harvard Staff Writer
June 10, 2025
7 min read
Policy expert Dustin Tingley studies transition to renewable energy, knows from work, life how economic shifts rattle through communities
A series focused on the personal side of Harvard research and teaching.
Over the last decade, Dustin Tingley has reconsidered his beliefs about expertise.
As a public policy expert, Tingley has devised quantitative ways to understand the messy problems and sometimes messier datasets that abound in political economy, international trade, and political science. In recent years, he has turned his attention to the transition to renewable energy amid the quickening pace of climate change.
As he has done so, Tingley has found himself shifting focus from datasets that tell a story in numbers to stories told by people experiencing changing economic circumstances and climate-stressed times.
“I came to the realization that there was so much expertise about this topic that was not in academia,” said Tingley, the Thomas D. Cabot Professor of Public Policy at the Harvard Kennedy School and professor of government in the Faculty of Arts and Sciences. “You go where the knowledge is, and the knowledge is in the field. The knowledge is in the lived experience of communities and people.”
Tingley hasn’t abandoned data and its power to illuminate in ways beyond the reach of anecdote and personal experience. But he’s also come to recognize that data alone doesn’t tell the full story. Missing in high-level, numbers-driven discussions of things like jobs to be lost in the fossil fuel industry are the community-level impacts of shifts in industries that underpin not just household finances, but also local and regional economies. This includes everything from concerns around infrastructure to downtown retail zones to the ability of local governments to fund things like public safety and schools.
“It’s easy to think about this in terms of fossil-fuel jobs, which are important to focus on, but what that misses is that the local economic tax base depends on it,” said Tingley. “That was not on my radar at all, but it was one of the first things that people would raise. Local economic development officials or county commissioners would show, for example, a picture of their local football stadium. I didn’t have an appreciation of how embedded and suffused all this was.”
The work culminated in 2023’s “Uncertain Futures: How to Unlock the Climate Impasse.” The volume, co-authored with Alexander Gazmararian, relies on interviews, community meetings, and other forums to present an on-the-ground view of climate change and the coming energy transition. Focusing on individuals, business owners, and community leaders, it seeks lessons from those likely to feel the transition most deeply.
“I definitely brought my quantitative knowledge and expertise to bear,” Tingley said. “But the more qualitative interview, the listening and learning, was necessary because my slice of academia did not have the bigger picture.”
Tingley’s shift in perspective was perhaps preordained. Though his work in international relations has largely followed the numbers, a shift to communities in transition echoes the changes that rocked the places where he grew up.
Tingley in front of his family’s farm house as a child.
His family’s financial situation improved over time, but they struggled when he was young. They lived in a rural part of North Carolina where furniture-making and tobacco-growing were important industries, both of which would encounter significant challenges. The region’s furniture industry declined due to foreign competition and outsourcing, while tobacco has long been under assault because of health concerns.
He also recalls visits to his father’s family in West Virginia, driving through coal country, with blasted mountaintops and mile-long coal trains. The economic impact of the industry’s decadeslong decline was apparent even to his young eyes, as he watched weathered shacks and cars on blocks in front yards pass by his window.
Tingley has always had an interest in the environment, which he says was piqued by his family’s move to New Jersey in middle school, with the Garden State’s juxtaposition of oil refineries and urban sprawl, fertile farmland and natural Pine Barrens.
But his first academic passion was international affairs. That interest developed in the years around the Cold War’s end, when his North Carolina elementary school had him crouching under his desk during nuclear drills, the Soviet Union collapsed, and later, in high school in the mid-1990s, when protests on the streets highlighted globalization’s inequities.
“I started to become more aware of the world, learning more systematically about war and conflict and the Cold War and international trade — you read stories about protests — I realized there’s a big world out there,” Tingley said.
In the late 1990s and early 2000s, Tingley studied at the University of Rochester, earning a political science degree and a minor in math. After teaching at a private school in New York for two years, he headed to graduate school at Princeton, where he became more deeply involved in research. The work blended statistics and political science, and he began developing new statistical tools when existing ones weren’t up to handling the complex and often unruly data sets.
“You go where the knowledge is, and the knowledge is in the field. The knowledge is in the lived experience of communities and people.”
“He’s full of energy and interested in a lot of different things,” said Kosuke Imai, one of Tingley’s professors at Princeton who today is professor of government and of statistics at Harvard. “We worked on these statistical methods, but he went on to other research about international trade and how that affects domestic actors. Now he’s on to climate change. He’s quite versatile in terms of being able to understand today’s need.”
Imai said Tingley’s energy is infectious and part of what makes him a good leader. At Princeton, Tingley was captain of the department’s softball team — Imai recalls being pulled from the whiteboard to the diamond on occasion.
At Harvard, Tingley has taken on a more formal leadership role as deputy vice provost for advances in learning, where he has worked to create educational tools and resources for students and co-chaired a study on climate education.
“He has an energy that is contagious to his colleagues, friends, and collaborators, plus he works extremely hard,” Imai said. “He’s also down-to-earth, doesn’t assume anything, and is a very straightforward person.”
After graduating in 2010, Tingley came to Harvard as an assistant professor of government where, over the next few years, he became increasingly interested in climate change. After gaining tenure in 2015, he began to look for climate-related problems to explore.
His research eventually touched on the U.S. Trade Adjustment Assistance program, which provides financial support to those who have lost their jobs due to international competition. He began to wonder whether workers displaced by the clean-energy transition might benefit from something similar.
“I thought, all these fossil-fuel people are going to lose their jobs, what are we going to do about them?” Tingley said. “I ran polling on an idea that I called ‘climate adjustment assistance,’ basically asking, ‘Would you support helping fossil-fuel workers transition?’ and bipartisan majorities supported it.”
Since “Uncertain Futures,” Tingley has continued his work. He is part of a cluster of the Salata Institute for Climate and Sustainability that carries on themes in his book. The work uses community surveys, public hearings, and in-person interviews to gather experiences and opinions on the best way forward.
In August 2024, Tingley and pre-doctoral fellow Ana Martinez authored a report, “Federal Land, Leasing, Energy, and Local Public Finances,” examining how differently the nation handles proceeds from fossil fuel versus wind- and solar-generating facilities.
Proceeds from fossil-fuel extraction on federal land are shared with nearby towns and states and provide important revenue for them. But the report noted that when it comes to renewable energy, the federal government keeps all the money.
The pair conducted a nationwide poll, finding that significant majorities of voters in both parties support sending renewable revenue to local communities, a step that might help build acceptance in the most affected places.
“Just convincing someone that there’s a problem is a totally different thing from putting in place a solution that they can afford,” Tingley said. “People vote with their pocketbooks.”
Suppose you were shown that an artificial intelligence tool offers accurate predictions about some stocks you own. How would you feel about using it? Now, suppose you are applying for a job at a company where the HR department uses an AI system to screen resumes. Would you be comfortable with that?
A new study finds that people are neither entirely enthusiastic nor totally averse to AI. Rather than falling into camps of techno-optimists and Luddites, people are discerning about the practical upshot of using AI, case by case.
“We propose that AI appreciation occurs when AI is perceived as being more capable than humans and personalization is perceived as being unnecessary in a given decision context,” says MIT Professor Jackson Lu, co-author of a newly published paper detailing the study’s results. “AI aversion occurs when either of these conditions is not met, and AI appreciation occurs only when both conditions are satisfied.”
People’s reactions to AI have long been subject to extensive debate, often producing seemingly disparate findings. An influential 2015 paper on “algorithm aversion” found that people are less forgiving of AI-generated errors than of human errors, whereas a widely noted 2019 paper on “algorithm appreciation” found that people preferred advice from AI, compared to advice from humans.
To reconcile these mixed findings, Lu and his co-authors conducted a meta-analysis of 163 prior studies that compared people’s preferences for AI versus humans. The researchers tested whether the data supported their proposed “Capability–Personalization Framework” — the idea that in a given context, both the perceived capability of AI and the perceived necessity for personalization shape our preferences for either AI or humans.
Across the 163 studies, the research team analyzed over 82,000 reactions to 93 distinct “decision contexts” — for instance, whether or not participants would feel comfortable with AI being used in cancer diagnoses. The analysis confirmed that the Capability–Personalization Framework indeed helps account for people’s preferences.
“The meta-analysis supported our theoretical framework,” Lu says. “Both dimensions are important: Individuals evaluate whether or not AI is more capable than people at a given task, and whether the task calls for personalization. People will prefer AI only if they think the AI is more capable than humans and the task is nonpersonal.”
He adds: “The key idea here is that high perceived capability alone does not guarantee AI appreciation. Personalization matters too.”
For example, people tend to favor AI when it comes to detecting fraud or sorting large datasets — areas where AI’s abilities exceed those of humans in speed and scale, and personalization is not required. But they are more resistant to AI in contexts like therapy, job interviews, or medical diagnoses, where they feel a human is better able to recognize their unique circumstances.
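The framework’s core rule, as the researchers state it, is a simple conjunction. A minimal sketch (the function name and boolean encoding are mine, not the authors’):

```python
def prefers_ai(ai_seen_as_more_capable: bool, personalization_needed: bool) -> bool:
    """Capability-Personalization Framework, as described in the study:
    AI appreciation arises only when AI is perceived as more capable than
    humans AND the task is perceived as not requiring personalization."""
    return ai_seen_as_more_capable and not personalization_needed

# Decision contexts mentioned in the article:
print(prefers_ai(True, False))   # fraud detection, sorting large datasets: True
print(prefers_ai(True, True))    # therapy, medical diagnosis: False
print(prefers_ai(False, False))  # AI not seen as more capable: False
```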
“People have a fundamental desire to see themselves as unique and distinct from other people,” Lu says. “AI is often viewed as impersonal and operating in a rote manner. Even if the AI is trained on a wealth of data, people feel AI can’t grasp their personal situations. They want a human recruiter, a human doctor who can see them as distinct from other people.”
Context also matters: From tangibility to unemployment
The study also uncovered other factors that influence individuals’ preferences for AI. For instance, AI appreciation is more pronounced for tangible robots than for intangible algorithms.
Economic context also matters. In countries with lower unemployment, AI appreciation is more pronounced.
“It makes intuitive sense,” Lu says. “If you worry about being replaced by AI, you’re less likely to embrace it.”
Lu is continuing to examine people’s complex and evolving attitudes toward AI. While he does not view the current meta-analysis as the last word on the matter, he hopes the Capability–Personalization Framework offers a valuable lens for understanding how people evaluate AI across different contexts.
“We’re not claiming perceived capability and personalization are the only two dimensions that matter, but according to our meta-analysis, these two dimensions capture much of what shapes people’s preferences for AI versus humans across a wide range of studies,” Lu concludes.
In addition to Lu, the paper’s co-authors are Xin Qin, Chen Chen, Hansen Zhou, Xiaowei Dong, and Limei Cao of Sun Yat-sen University; Xiang Zhou of Shenzhen University; and Dongyuan Wu of Fudan University.
The research was supported, in part, by grants to Qin and Wu from the National Natural Science Foundation of China.
MIT has an unparalleled history of bringing together interdisciplinary teams to solve pressing problems — think of the development of radar during World War II, or leading the international coalition that cracked the code of the human genome — but the challenge of climate change could demand a scale of collaboration unlike any that’s come before at MIT.
“Solving climate change is not just about new technologies or better models. It’s about forging new partnerships across campus and beyond — between scientists and economists, between architects and data scientists, between policymakers and physicists, between anthropologists and engineers, and more,” MIT Vice President for Energy and Climate Evelyn Wang told an energetic crowd of faculty, students, and staff on May 6. “Each of us holds a piece of the solution — but only together can we see the whole.”
Undeterred by heavy rain, approximately 300 campus community members filled the atrium in the Tina and Hamid Moghadam Building (Building 55) for a spring gathering hosted by Wang and the Climate Project at MIT. The initiative seeks to direct the full strength of MIT to address climate change, which Wang described as one of the defining challenges of this moment in history — and one of its greatest opportunities.
“It calls on us to rethink how we power our world, how we build, how we live — and how we work together,” Wang said. “And there is no better place than MIT to lead this kind of bold, integrated effort. Our culture of curiosity, rigor, and relentless experimentation makes us uniquely suited to cross boundaries — to break down silos and build something new.”
The Climate Project is organized around six missions, thematic areas in which MIT aims to make significant impact, ranging from decarbonizing industry to new policy approaches to designing resilient cities. The faculty leaders of these missions posed challenges to the audience before circulating through the crowd to share their perspectives and to discuss community questions and ideas.
Wang and the Climate Project team were joined by a number of research groups, startups, and MIT offices conducting relevant work today on issues related to energy and climate. For example, the MIT Office of Sustainability showcased efforts to use the MIT campus as a living laboratory; MIT spinouts such as Forma Systems, which is developing high-performance, low-carbon building systems, and Addis Energy, which envisions using the earth as a reactor to produce clean ammonia, presented their technologies; and visitors learned about current projects in MIT labs, including DebunkBot, an artificial intelligence-powered chatbot that can persuade people to shift their attitudes about conspiracies, developed by David Rand, the Erwin H. Schell Professor at the MIT Sloan School of Management.
Benedetto Marelli, an associate professor in the Department of Civil and Environmental Engineering who leads the Wild Cards Mission, said the energy and enthusiasm that filled the room was inspiring — but that the individual conversations were equally valuable.
“I was especially pleased to see so many students come out. I also spoke with other faculty, talked to staff from across the Institute, and met representatives of external companies interested in collaborating with MIT,” Marelli said. “You could see connections being made all around the room, which is exactly what we need as we build momentum for the Climate Project.”
Hundreds of students, faculty, and staff turned out on Tuesday, May 6, for a community gathering hosted by Evelyn Wang, vice president for energy and climate, to learn about the Climate Project at MIT, make connections, and exchange ideas.
A clinical trial in patients with advanced breast cancer has found the use of liquid biopsy blood tests for early detection of a treatment-resistant mutation, followed by a new type of treatment, substantially extends the period of tumor control.
CNA, 28 May 2025; The Straits Times, 29 May 2025, The Big Story, pA4; The Business Times, 29 May 2025, p4; Lianhe Zaobao, 29 May 2025, Singapore, p5; Berita Harian, 29 May 2025, p5; Tamil Murasu, 29 May 2025, p2
In celebration of its 15th anniversary, the Centre for Governance and Sustainability (CGS) at NUS Business School launched a commemorative e-book titled 15 Lessons in Corporate Sustainability and Governance on 16 May 2025, at an event organised by the school’s alumni office NUS BIZAlum to mark the Business School’s 60th anniversary.
The e-book is a compilation of commentaries on corporate governance and sustainability, many of which were previously published by local and international media platforms and updated for the commemorative publication. It aims to inspire individuals and organisations to embrace the cause through essays on the mindset, implementation steps, communication approach, and talent investments required to integrate sustainability into a business.
Aside from marking CGS’ milestone anniversary, the e-book embodies CGS Director Professor Lawrence Loh’s vision of the centre as a source of practical, understandable information. He noted that the emphasis by the media outlets on clarity and conciseness when reviewing the commentaries for publication has helped ensure that the content remains accessible to business practitioners, students, and policymakers. Prof Loh added that he envisions the e-book as a useful educational tool for his courses and lectures.
“We don’t want to be seen as an ivory tower. We want to be seen as a lighthouse that guides practice and policy,” he said, telling NUS News that his hope is for CGS to eventually go beyond pointing the way to actively leading companies to best practices, like a tugboat guiding ships into safe harbour.
Research with policy impact
Housed under the NUS Business School, CGS was first established as the Centre for Governance, Institutions and Organisations in 2010 with a team of five and an initial focus on corporate governance. Since the beginning, its work has centred on applied research, such as studies to identify baselines or trends among companies, using an integrative approach that leverages industry outreach and collaborations to ensure that its research yields useful insights for managers and policymakers.
Its current projects span corporate governance, corporate sustainability, and the interface between the two, yielding findings that are useful not only to the projects’ funding partners but also to consultants and policymakers who cite them in recommendations and proposals.
All this is accomplished by a lean team of 10, comprising Prof Loh, seven researchers, and two administrative staff, with the help of visiting PhD students on one-year appointments and student interns, research assistants, and Business School faculty members who contribute on an ad hoc basis. Their work has far-reaching impact, garnering the centre over 500 media mentions annually in recent years in local and international media outlets such as AFP, BBC, CNN, China Daily, Nikkei Asia, The Washington Post, and Xinhua News.
Prior to taking over from Professor Chang Sea-Jin as CGS director in 2015, Prof Loh played a pivotal role in adding corporate sustainability to the centre’s scope through his work on the first national study on sustainability reporting in Singapore as a research affiliate in 2014. The centre strengthened its corporate sustainability research pillar over the years with more studies and assessments, and it was relaunched as the Centre for Governance and Sustainability in 2021 to better represent its agenda.
In those early years, the topic was not on the radar for most of the business world, and Prof Loh recalls fielding basic questions like “What is sustainability?” from business leaders.
But CGS’ foresight was rewarded in 2015 when the Paris Agreement on climate change was signed and interest in corporate sustainability skyrocketed. CGS stood out as a reliable source that was already doing work in the field, and its reputation was further bolstered by official recognition and partnerships with regulators and business players, such as the Monetary Authority of Singapore (MAS), the Singapore Exchange (SGX), and luxury group Kering.
For example, in 2013, MAS appointed CGS and the Singapore Institute of Directors as the Domestic Ranking Body to assess and rank Singapore’s listed companies on corporate governance for the ASEAN Capital Markets Forum. The central bank also uses CGS data in its annual financial stability review.
In 2019, CGS was appointed by SGX as an independent assessor to conduct biennial assessments of Singapore-listed companies’ sustainability reporting, and the Singapore Exchange Regulation website lists the Singapore Governance and Transparency Index, a key CGS project, among its sustainability resources for companies.
Since May 2024, CGS has been studying climate- and nature-related transition strategies among companies across Asia Pacific under a three-year collaboration with luxury group Kering, which owns brands such as Gucci and Bottega Veneta.
“Everything we do in CGS is evidence-based,” said Prof Loh, noting that the positions taken by CGS are not based on personal convictions or speculation but derived from data assembled with the help of industry partners and regulators. “Sustainability is not airy-fairy. It has to be factual.”
Prof Loh translates the findings into meaningful insights for businesses through his written commentaries, analysis provided to the media, and speeches at various business events and webinars. These have helped CGS gain mindshare in the business community and promote an evidence-based approach to the sustainability conversation.
Looking back, looking forward
In his 10 years as CGS director, Prof Loh has observed awareness of corporate sustainability evolve significantly, and he believes CGS has helped to drive market behaviour towards greater transparency and sustainability by providing the data to support policy changes and new regulations.
“What we have found over the years to be very interesting is this idea, that ‘What you regulate is what you get.’ As we track company disclosures, we found that whenever there’s a new regulation like a requirement or listing rule, there will be a corresponding change in market behaviour,” he shared.
Business leaders’ questions too have progressed from “Why do we need to care about sustainability?” to “How do we do it?” More recently, the questions have revolved around how much to invest into it, which Prof Loh says reflects a new perspective of sustainability as capital expenditure – a long-term investment – instead of operating expenditure.
It is an encouraging sign, especially since CGS has been working to get businesses to consider economics alongside the familiar environmental, social, and governance (ESG) framework in their sustainability activities. Prof Loh has shared about this new EESG approach over the past year through opinion pieces and masterclasses like the one he delivered at the e-book launch titled “From ESG to EESG: Incorporating Economics into Sustainability,” in which he built the business case for sustainability efforts that go beyond responsibility and risk management to yield returns.
Even as the work of establishing EESG as the new norm has just begun, Prof Loh looks forward to CGS leading the charge to tackle what he sees as the final frontier of sustainability – fostering a well-being economy that supports human and environmental well-being rather than prioritising economic growth at all costs.
In July 2024, CGS signed a memorandum of understanding with TPC (Tsao Pao Chee) Group to establish the Well-Being and EESG Alliance, which will develop strategies and metrics to guide organisations in embedding well-being into their sustainability efforts.
Said Prof Loh: “In sustainability, looking at climate change is only half the story. An emerging area is human health, and well-being may be the final thing that sustainability is all about – well-being that is holistically defined, not just physically or physiologically, but also psychologically.”
Influencing such major change may seem ambitious for the small research centre, but Prof Loh’s hope is that CGS will not only be recognised for its successes but also rewarded with more resources to scale up its work in its next chapter. After all, the past 15 years have shown that the centre can punch well above its weight, even with limited staff and funding.
“We have reached a certain point where we just need that boost into the next curve, and an investment at this point would help us to ride on the goodwill from our work,” he said. “I think we have not realised our full potential yet.”
CNA, 28 May 2025; Channel 5 News, 28 May 2025; 8world Online, 28 May 2025; CNA938, 28 May 2025; The Straits Times, 29 May 2025, The Big Story, pA6; Berita Harian, 29 May 2025, p4; Tamil Murasu, 29 May 2025, Front Page
Professor Dariusz Wójcik from the Department of Geography at the NUS Faculty of Arts and Social Sciences has been conferred the prestigious Murchison Award by the Royal Geographical Society with the Institute of British Geographers (RGS-IBG) for geographical excellence.
The RGS-IBG noted that the award is in recognition of his body of publications, which has “forged a whole new branch of geographical science” and has established geography “as a key discipline for the study of money and finance”. They cited his recently published book The Atlas of Finance – the first-ever book-sized collection of maps and visuals dedicated to financial geography – as “a definitive contribution to our understanding of the geographical structure of the global financial system.” Past recipients of the award include the notable Dr Roger Tomlinson, a distinguished geographer and creator of the world’s first Geographic Information System.
Founded in 1830 and based in London, the RGS-IBG is a renowned institution dedicated to advancing geographical science through supporting and promoting geography and its practitioners through research, education and public engagement. With 16,000 members currently, the organisation is the largest and most active scholarly geographical society.
Prof Wójcik was presented the award by the Society’s President, Professor Dame Jane Francis, at a ceremony in London on 2 June. On receiving the award, he shared, “I feel honoured and humbled. I hope that this occasion serves as encouragement to others who study finance and space, within and beyond financial geography.”
A financial and economic geographer, Prof Wójcik draws on insights from geography, economics, political economy, sociology and anthropology in his research. He has published 8 books and over 120 articles and book chapters in leading journals and edited volumes. Before coming to NUS, he served as a Professor of Economic Geography at the University of Oxford's School of Geography and the Environment until 2023. He is also the founder and inaugural chair of the Global Network on Financial Geography (FinGeo), a research network that fosters collaboration among scholars interested in the spatial aspects of finance, and the founding Editor-in-Chief of the Finance & Space journal.
More about Prof Wójcik can be found in an RGS-IBG interview.
The Edison Histotripsy System was purchased thanks to a generous donation to the University of Cambridge from Hong Kong-based philanthropist Sir Ka-shing Li, a longstanding supporter of cancer research at the University.
Histotripsy uses pulsed sound waves to create ‘bubble clouds’ from gases present in targeted tissue. These bubble clouds form and collapse in microseconds, creating mechanical forces able to destroy tissue at cellular and sub-cellular levels while avoiding ionising energy of radiation, heat damage from thermal treatments, or the need for surgery.
Treatment is delivered in a single short session – potentially taking no longer than 30 minutes – with limited or no pain and a quick recovery, and can be performed as a day case. The speed of delivery has the potential to reduce cancer treatment times, avoid disease progression and improve cancer survival.
The system will be demonstrated today by Dr Teik Choon See, Consultant Interventional Radiologist at Cambridge University Hospitals NHS Foundation Trust (CUH). Guests will include Solina Chau, Director of the Li Ka Shing Foundation, and Baroness Merron, Parliamentary Under-Secretary of State at the Department of Health and Social Care (DHSC).
The machine, manufactured by HistoSonics, is expected to be fully installed at CUH later this year, where it will be used initially to treat patients with primary and secondary liver tumours before being expanded to treat tumours in other organs.
Previously, 23 patients from Europe were recruited in a histotripsy clinical trial that was completed in 2022. So far, over 1,500 patients worldwide have received treatment using histotripsy, mainly in the United States following approval by the US Food and Drug Administration in late 2023. The machine at Cambridge will be the first in the UK and Europe to treat patients as part of their clinical care pathway, outside the trial setting.
Professor Deborah Prentice, Vice-Chancellor of the University of Cambridge, said: “Through his longstanding support of cancer research at Cambridge, Sir Ka-shing Li continues to make a significant impact on outcomes for cancer patients. Cutting-edge technology such as this histotripsy machine allows Cambridge to remain at the forefront of understanding and treating cancer, a position we aim to strengthen further with Cambridge Cancer Research Hospital.”
Roland Sinker, Chief Executive of CUH said: “Histotripsy is an exciting new technology that will make a huge difference to patients. By offering this non-invasive, more targeted treatment we can care for more people as outpatients and free up time for surgeons to treat more complex cases. The faster recovery times mean patients will be able to return to their normal lives more quickly, which will also reduce pressure on hospital beds, helping us ensure that patients are able to receive the right treatment at the right time. We are delighted to be receiving this new state of the art machine.”
Fiona, who has lived with cancer for over two decades and is Co-Chair of the Patient Advisory Group for Cambridge Cancer Research Hospital, has been involved in planning and designing the new hospital. She said: “This is seriously good news. A new, non-invasive option to treat these cancers is very welcome indeed. For patients for whom ordinary surgery is no longer an option, this could make all the difference.”
Health and Social Care Secretary Wes Streeting granted authorisation for controlled early access to the device via an unmet clinical need authorisation. Available through the UK’s Innovative Devices Access Pathway programme, this bypasses red tape to accelerate lengthy authorisation stages, so NHS patients benefit from it years earlier than planned.
Wes Streeting said: “Bureaucracy has become a handbrake on ambition, stopping innovation in its tracks and holding our health service back. But through our Plan for Change, we are slashing red tape, so game-changing new treatments reach the NHS front line quicker – transforming healthcare.
“Regulation is vital to protect patients. However, as the pace of innovation ramps up, our processes must be more agile to help speed the shift from analogue to digital. Our common-sense approach to regulation will streamline approval processes so countless more patients are liberated from life-limiting conditions.”
Last year, an £11million donation was made in honour of Sir Ka-shing Li to support the now-renamed Li Ka Shing Early Cancer Institute. Sir Ka-shing Li has previously made generous donations to support cancer research at the University, including in 2007 to the Li Ka Shing Centre, which houses the CRUK Cambridge Institute.
Cutting-edge technology such as the histotripsy machine will enable Cambridge Cancer Research Hospital, a partnership with Cambridge University Hospitals NHS Foundation Trust and the University of Cambridge, to change the story of cancer. The University and Addenbrooke's Charitable Trust (ACT) are fundraising for the new hospital, which will transform how we diagnose and treat cancer. The hospital will treat patients across the East of England, but the research that takes place there promises to change the lives of cancer patients across the UK and beyond. Find out more here.
NHS patients at Addenbrooke’s Hospital, Cambridge, will become the first in the UK and Europe to undergo incisionless ultrasound surgery using a cutting-edge ‘histotripsy machine’ as part of their cancer care.
Through his longstanding support of cancer research at Cambridge, Sir Ka-shing Li continues to make a significant impact on outcomes for cancer patients
Researchers from the Disruptive and Sustainable Technologies for Agricultural Precision (DiSTAP) interdisciplinary research group within the Singapore-MIT Alliance for Research and Technology have developed the world’s first near-infrared fluorescent nanosensor capable of real-time, nondestructive, and species-agnostic detection of indole-3-acetic acid (IAA) — the primary bioactive auxin hormone that controls the way plants develop, grow, and respond to stress.
Auxins, particularly IAA, play a central role in regulating key plant processes such as cell division, elongation, root and shoot development, and response to environmental cues like light, heat, and drought. External factors like light affect how auxin moves within the plant, temperature influences how much is produced, and a lack of water can disrupt hormone balance. When plants cannot effectively regulate auxins, they may not grow well, adapt to changing conditions, or produce as much food.
Existing IAA detection methods, such as liquid chromatography, require taking samples from the plant, which harms or removes part of it. Conventional methods also measure the effects of IAA rather than detecting it directly, and cannot be used universally across different plant types. In addition, because IAA is a small molecule that cannot be easily tracked in real time, measuring auxin otherwise requires inserting biosensors containing fluorescent proteins into the plant’s genome, making the plant emit a fluorescent signal for live imaging.
SMART’s newly developed nanosensor enables direct, real-time tracking of auxin levels in living plants with high precision. The sensor uses near infrared imaging to monitor IAA fluctuations non-invasively across tissues like leaves, roots, and cotyledons, and it is capable of bypassing chlorophyll interference to ensure highly reliable readings even in densely pigmented tissues. The technology does not require genetic modification and can be integrated with existing agricultural systems — offering a scalable precision tool to advance both crop optimization and fundamental plant physiology research.
By providing real-time, precise measurements of auxin, the sensor empowers farmers with earlier and more accurate insights into plant health. With these insights and comprehensive data, farmers can make smarter, data-driven decisions on irrigation, nutrient delivery, and pruning, tailored to the plant’s actual needs — ultimately improving crop growth, boosting stress resilience, and increasing yields.
“We need new technologies to address the problems of food insecurity and climate change worldwide. Auxin is a central growth signal within living plants, and this work gives us a way to tap it to give new information to farmers and researchers,” says Michael Strano, co-lead principal investigator at DiSTAP, Carbon P. Dubbs Professor of Chemical Engineering at MIT, and co-corresponding author of the paper. “The applications are many, including early detection of plant stress, allowing for timely interventions to safeguard crops. For urban and indoor farms, where light, water, and nutrients are already tightly controlled, this sensor can be a valuable tool in fine-tuning growth conditions with even greater precision to optimize yield and sustainability.”
The research team documented the nanosensor’s development in a paper titled, “A Near-Infrared Fluorescent Nanosensor for Direct and Real-Time Measurement of Indole-3-Acetic Acid in Plants,” published in the journal ACS Nano. The sensor comprises single-walled carbon nanotubes wrapped in a specially designed polymer, which enables it to detect IAA through changes in near infrared fluorescence intensity. Successfully tested across multiple species, including Arabidopsis, Nicotiana benthamiana, choy sum, and spinach, the nanosensor can map IAA responses under various environmental conditions such as shade, low light, and heat stress.
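To make the detection principle concrete: a sensor that reports an analyte through changes in fluorescence intensity is typically calibrated so that a measured fractional intensity change can be mapped back to a concentration. The sketch below is purely illustrative and is not the authors' calibration; it assumes a simple saturating binding isotherm, and the parameters `A` (maximum fractional response) and `Kd` (apparent dissociation constant) are hypothetical.

```python
# Toy calibration for an intensity-based fluorescent sensor (illustrative only).
# Assumed model: (I - I0) / I0 = A * c / (Kd + c), a saturating binding isotherm
# commonly used for nanotube-based optical sensors. A and Kd are hypothetical.

def response(c: float, A: float = 0.6, Kd: float = 2.0) -> float:
    """Fractional fluorescence change for analyte concentration c (in µM)."""
    return A * c / (Kd + c)

def concentration(dI_over_I0: float, A: float = 0.6, Kd: float = 2.0) -> float:
    """Invert the isotherm: recover concentration from a measured change."""
    if not 0 <= dI_over_I0 < A:
        raise ValueError("response outside the sensor's dynamic range")
    return Kd * dI_over_I0 / (A - dI_over_I0)

# Round trip: a 5 µM sample produces a response that maps back to 5 µM.
r = response(5.0)
print(round(concentration(r), 6))  # → 5.0
```

In practice, calibration curves of this kind are fit to measurements of reference samples, and the invertible one-to-one mapping within the dynamic range is what allows a real-time intensity readout to be reported as a hormone level.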
“This sensor builds on DiSTAP’s ongoing work in nanotechnology and the CoPhMoRe technique, which has already been used to develop other sensors that can detect important plant compounds such as gibberellins and hydrogen peroxide. By adapting this approach for IAA, we’re adding to our inventory of novel, precise, and nondestructive tools for monitoring plant health. Eventually, these sensors can be multiplexed, or combined, to monitor a spectrum of plant growth markers for more complete insights into plant physiology,” says Duc Thinh Khong, research scientist at DiSTAP and co-first author of the paper.
“This small but mighty nanosensor tackles a long-standing challenge in agriculture: the need for a universal, real-time, and noninvasive tool to monitor plant health across various species. Our collaborative achievement not only empowers researchers and farmers to optimize growth conditions and improve crop yield and resilience, but also advances our scientific understanding of hormone pathways and plant-environment interactions,” says In-Cheol Jang, senior principal investigator at TLL, principal investigator at DiSTAP, and co-corresponding author of the paper.
Looking ahead, the research team is looking to combine multiple sensing platforms to simultaneously detect IAA and its related metabolites to create a comprehensive hormone signaling profile, offering deeper insights into plant stress responses and enhancing precision agriculture. They are also working on using microneedles for highly localized, tissue-specific sensing, and collaborating with industrial urban farming partners to translate the technology into practical, field-ready solutions.
The research was carried out by SMART, and supported by the National Research Foundation of Singapore under its Campus for Research Excellence And Technological Enterprise program. The universal nanosensor was developed in collaboration with Temasek Life Sciences Laboratory (TLL) and MIT.
Lorena Tovar is an assistant director for academic programs for the MIT Department of Urban Studies and Planning (DUSP), where she runs the Master in City Planning (MCP) program. A longtime employee of MIT, she has gained a great breadth of institutional knowledge and values in the course of making connections with and supporting both faculty and students.
Tovar joined DUSP in April 2024, but she has been an employee of the Institute for almost 15 years. She worked in the Office of Minority Education (OME) and the Priscilla King Gray Public Service Center before leaving MIT for a couple years after becoming a parent. When she returned to MIT, she came across a project coordinator role at the Center for Constructive Communication (CCC) at the MIT Media Lab — a part of the Institute she had always been curious about. Tovar likens the environment at CCC to that of a startup, which made her time in a research center very different from her previous jobs working directly with students. CCC, she says, was an “all-hands-on-deck” environment where she worked on projects with external community members, a nonprofit partner, and CCC researchers and practitioners. This role was very different from her previous, and current, positions.
When Tovar saw that DUSP was hiring for a student-focused job, she applied, excited about the opportunity to connect with students, something that she missed while working at CCC. While she had previously worked with students, working in an academic department and overseeing a degree program means she now works with them in a different capacity.
A master’s in city planning provides students with the skills and specialized knowledge needed to fill traditional and emerging planning roles. At DUSP, social sciences, technology, and policy come together to enable students to support the development of cities and make them more equitable. Within DUSP there are four program groups (city design and development; housing, community, and economic development; international development; and environmental policy and planning) in addition to labs. Tovar refers to her position as “the front line for students” throughout their time in the two-year professional degree program. Currently, this means supporting about 135 students in their first or second year of the degree. She also works closely with faculty, serving as the bridge between them and their students. This bridging role includes working with academic advisors to ensure students are excelling academically and connecting students to the right resources and faculty members on campus as needed. Additionally, Tovar serves on the MCP Committee, which oversees and guides various aspects of the MCP program, including academic policies, program requirements, admissions, and student progress.
Soundbytes
Q: What project that you worked on are you the proudest of?
A: During the historic Boston mayoral election of 2021, when I was working at the Media Lab, CCC created a civic initiative called Real Talk for Change Boston. I co-led the project management for the initiative, which had the goal of using technology to elevate storytelling from often-underheard voices. We worked with community leaders across Boston on social issues that residents face every day, which was especially important as we were just coming out of the pandemic. It was really cool to work with the professors and researchers in CCC as well as the community leaders. The conversations were powerful, and the goal was to get some of these voices in front of the actual mayoral candidates. Real Talk Fellows used human-led AI tools developed at CCC to review over 3,000 minutes of conversation for themes. This analysis was published and made available to the public and shared at a forum where highlights of these conversations were played for the candidates. The experience of working so closely with the community leaders in Boston shaped my personal interest in getting more involved in my own local community. It was really cool and inspiring to see an idea go from a spark in someone’s head to something real and impactful, and to be part of the collaboration, hard work, and heart that went into making it happen was a great experience.
Q: What do you like the most about the MIT community?
A: I came back to MIT because I missed the sense of everybody working toward a shared goal. Even though I’ve worked in different offices, each one has this sort of lens of caring for the world, making the world better, and making the community better as a whole. The positions I’ve held since returning to MIT really reflect my personal interest of tackling social challenges and using technology to elevate underheard voices. I really appreciate the community’s curiosity and openness.
Q: What advice would you give to a new staff member at MIT?
A: Don't be afraid to engage, ask questions, and find the things that you are interested in working on. There is an opportunity for employees to discover what their personal interests are and make it their job. Social justice is important to me, so I look for roles that align with that value. While my current role doesn’t focus on it directly, it’s been great to work alongside students and faculty who are doing impactful work to improve communities both locally and globally. I recently ran into someone that I met when we were both new at MIT. At the time, we were both in administrative, junior roles and the other person had an interest in communications. Today, they are a director of communications. Bringing your personal experiences and interests to your job can go a long way.
An autonomous drone carrying water to help extinguish a wildfire in the Sierra Nevada might encounter swirling Santa Ana winds that threaten to push it off course. Rapidly adapting to these unknown disturbances inflight presents an enormous challenge for the drone’s flight control system.
To help such a drone stay on target, MIT researchers developed a new, machine learning-based adaptive control algorithm that could minimize its deviation from its intended trajectory in the face of unpredictable forces like gusty winds.
Unlike standard approaches, the new technique does not require the person programming the autonomous drone to know anything in advance about the structure of these uncertain disturbances. Instead, the control system’s artificial intelligence model learns all it needs to know from a small amount of observational data collected from 15 minutes of flight time.
Importantly, the technique automatically determines which optimization algorithm it should use to adapt to the disturbances, which improves tracking performance. It chooses the algorithm that best suits the geometry of the specific disturbances the drone is facing.
The researchers train their control system to do both things simultaneously using a technique called meta-learning, which teaches the system how to adapt to different types of disturbances.
Taken together, these ingredients enable their adaptive control system to achieve 50 percent less trajectory tracking error than baseline methods in simulations and perform better with new wind speeds it didn’t see during training.
In the future, this adaptive control system could help autonomous drones more efficiently deliver heavy parcels despite strong winds or monitor fire-prone areas of a national park.
“The concurrent learning of these components is what gives our method its strength. By leveraging meta-learning, our controller can automatically make choices that will be best for quick adaptation,” says Navid Azizan, who is the Esther and Harold E. Edgerton Assistant Professor in the MIT Department of Mechanical Engineering and the Institute for Data, Systems, and Society (IDSS), a principal investigator of the Laboratory for Information and Decision Systems (LIDS), and the senior author of a paper on this control system.
Azizan is joined on the paper by lead author Sunbochen Tang, a graduate student in the Department of Aeronautics and Astronautics, and Haoyuan Sun, a graduate student in the Department of Electrical Engineering and Computer Science. The research was recently presented at the Learning for Dynamics and Control Conference.
Finding the right algorithm
Typically, a control system incorporates a function that models the drone and its environment, and includes some existing information on the structure of potential disturbances. But in a real world filled with uncertain conditions, it is often impossible to hand-design this structure in advance.
Many control systems use an adaptation method based on a popular optimization algorithm, known as gradient descent, to estimate the unknown parts of the problem and determine how to keep the drone as close as possible to its target trajectory during flight. However, gradient descent is only one member of a larger family of algorithms, known as mirror descent, from which to choose.
“Mirror descent is a general family of algorithms, and for any given problem, one of these algorithms can be more suitable than others. The name of the game is how to choose the particular algorithm that is right for your problem. In our method, we automate this choice,” Azizan says.
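As a rough illustration of why this choice matters (a minimal sketch, not the researchers' controller): plain gradient descent is mirror descent under a squared Euclidean distance, while swapping in a negative-entropy distance-generating function yields the exponentiated-gradient update, which suits parameters that live on a probability simplex.

```python
import numpy as np

def gd_step(theta, g, lr):
    # Mirror descent with psi(x) = 0.5 * ||x||^2 reduces to plain
    # gradient descent on unconstrained parameters.
    return theta - lr * g

def eg_step(p, g, lr):
    # Mirror descent with psi(p) = sum_i p_i log p_i (negative entropy)
    # gives the exponentiated-gradient update, which keeps the iterate
    # on the probability simplex automatically.
    w = p * np.exp(-lr * g)
    return w / w.sum()

# Unconstrained toy: minimize f(x) = x^2 with gradient descent.
x = 5.0
for _ in range(50):
    x = gd_step(x, 2 * x, lr=0.1)

# Simplex toy: minimize the linear cost <c, p> over probabilities.
c = np.array([0.9, 0.1, 0.5])
p = np.ones(3) / 3
for _ in range(200):
    p = eg_step(p, c, lr=0.5)
# Mass concentrates on the cheapest coordinate (index 1).
```

The two updates differ only in how "distance" between successive iterates is measured; matching that measure to the problem's geometry is exactly the choice the method automates.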
In their control system, the researchers replaced the function that contains some structure of potential disturbances with a neural network model that learns to approximate them from data. This way, they don’t need an a priori structure of the wind speeds the drone could encounter.
Their method also uses an algorithm to automatically select the right mirror-descent function while learning the neural network model from data, rather than assuming a user has the ideal function picked out already. The researchers give this algorithm a range of functions to pick from, and it finds the one that best fits the problem at hand.
“Choosing a good distance-generating function to construct the right mirror-descent adaptation matters a lot in getting the right algorithm to reduce the tracking error,” Tang adds.
Learning to adapt
While the wind speeds the drone may encounter could change every time it takes flight, the controller’s neural network and mirror function should stay the same so they don’t need to be recomputed each time.
To make their controller more flexible, the researchers use meta-learning, teaching it to adapt by showing it a range of wind speed families during training.
“Our method can cope with different objectives because, using meta-learning, we can learn a shared representation through different scenarios efficiently from data,” Tang explains.
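A toy, Reptile-style meta-learning loop (our illustration with made-up numbers, not the authors' algorithm) conveys the idea: each "task" is a disturbance with a different amplitude, and meta-training finds an initialization from which a few gradient steps adapt well to an unseen amplitude.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 64)

def adapt(theta, w, steps=3, lr=0.1):
    # Inner loop: a few gradient steps on one task's squared error,
    # where the task is matching a disturbance d(t) = w * sin(t).
    for _ in range(steps):
        grad = 2 * np.mean((theta * np.sin(t) - w * np.sin(t)) * np.sin(t))
        theta -= lr * grad
    return theta

# Outer loop (Reptile-style): nudge the shared initialization toward
# each sampled task's adapted parameters.
theta_meta = 0.0
for _ in range(500):
    w = rng.uniform(2.0, 4.0)          # sample a disturbance "task"
    theta_meta += 0.1 * (adapt(theta_meta, w) - theta_meta)

# Three adaptation steps from the meta-learned start land much closer
# to an unseen amplitude than the same three steps from scratch.
theta_new = adapt(theta_meta, w=3.5)
```

The meta-learned initialization encodes what the disturbance family has in common, so only the task-specific residual must be learned at adaptation time.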
In the end, the user feeds the control system a target trajectory and it continuously recalculates, in real-time, how the drone should produce thrust to keep it as close as possible to that trajectory while accommodating the uncertain disturbance it encounters.
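One real-time loop of this general shape can be sketched as follows (a deliberately simplified 1-D toy with hypothetical gains, not the researchers' controller): a PD tracking law plus an online-adapted estimate of an unknown constant wind force, which the thrust command cancels.

```python
def simulate(d_true=1.5, steps=2000, dt=0.01, kp=6.0, kd=4.0, lr=2.0):
    # d_true: unknown wind force acting on the "drone" (to be estimated).
    x, v, d_hat = 0.0, 0.0, 0.0
    x_ref = 1.0                          # target position on the trajectory
    for _ in range(steps):
        err = x_ref - x
        u = kp * err - kd * v - d_hat    # thrust: PD feedback minus estimated wind
        d_hat += lr * (x - x_ref) * dt   # adapt the estimate from tracking error
        a = u + d_true                   # true dynamics include the wind
        v += a * dt                      # Euler-integrate velocity and position
        x += v * dt
    return x, d_hat

x, d_hat = simulate()
print(round(x, 3), round(d_hat, 3))  # x near 1.0, d_hat near the true wind 1.5
```

As the estimate d_hat converges to the true disturbance, the residual force vanishes and the tracking error goes to zero, which is the behavior the learned adaptation law is designed to accelerate.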
In both simulations and real-world experiments, the researchers showed that their method led to significantly less trajectory tracking error than baseline approaches with every wind speed they tested.
“Even if the wind disturbances are much stronger than we had seen during training, our technique shows that it can still handle them successfully,” Azizan adds.
In addition, the margin by which their method outperformed the baselines grew as the wind speeds intensified, showing that it can adapt to challenging environments.
The team is now performing hardware experiments to test their control system on real drones with varying wind conditions and other disturbances.
They also want to extend their method so it can handle disturbances from multiple sources at once. For instance, changing wind speeds could cause the weight of a parcel the drone is carrying to shift in flight, especially when the drone is carrying sloshing payloads.
They also want to explore continual learning, so the drone could adapt to new disturbances without the need to also be retrained on the data it has seen so far.
“Navid and his collaborators have developed breakthrough work that combines meta-learning with conventional adaptive control to learn nonlinear features and the suitable adaptation law from data. Key to their approach is the use of mirror descent techniques that exploit the underlying geometry of the problem and do so automatically. Their work can contribute significantly to the design of autonomous systems that need to operate in complex and uncertain environments,” says Babak Hassibi, the Mose and Lillian S. Bohn Professor of Electrical Engineering and Computing and Mathematical Sciences at Caltech, who was not involved with this work.
This research was supported, in part, by MathWorks, the MIT-IBM Watson AI Lab, the MIT-Amazon Science Hub, and the MIT-Google Program for Computing Innovation.
Will the perfect storm of potentially life-changing, artificial intelligence-driven health care and the desire to increase profits through subscription models alienate vulnerable patients?
For the third year in a row, MIT's Envisioning the Future of Computing Prize asked students to describe, in 3,000 words or fewer, how advancements in computing could shape human society for the better or worse. All entries were eligible to win a number of cash prizes.
Inspired by recent research on the greater effect microbiomes have on overall health, MIT-WHOI Joint Program in Oceanography and Applied Ocean Science and Engineering PhD candidate Annaliese Meyer created the concept of “B-Bots,” a synthetic bacterial mimic designed to regulate gut biomes and activated by Bluetooth.
For the contest, which challenges MIT students to articulate their musings for what a future driven by advances in computing holds, Meyer submitted a work of speculative fiction about how recipients of a revolutionary new health-care technology find their treatment in jeopardy with the introduction of a subscription-based pay model.
In her winning paper, titled “(Pre/Sub)scribe,” Meyer chronicles the usage of B-Bots from the perspective of both their creator and a B-Bots user named Briar. They celebrate the effects of the supplement, helping them manage vitamin deficiencies and chronic conditions like acid reflux and irritable bowel syndrome. Meyer says that the introduction of a B-Bots subscription model “seemed like a perfect opportunity to hopefully make clear that in a for-profit health-care system, even medical advances that would, in theory, be revolutionary for human health can end up causing more harm than good for the many people on the losing side of the massive wealth disparity in modern society.” Meyer also states that these opinions are her own and do not reflect any official stances of affiliated institutions.
As a Canadian, Meyer has experienced the differences between the health care systems in the United States and Canada. She recounts her mother’s recent cancer treatments, emphasizing the cost and coverage of treatments in British Columbia when compared to the U.S.
Aside from a cautionary tale of equity in the American health care system, Meyer hopes readers take away an additional scientific message on the complexity of gut microbiomes. Inspired by her thesis work in ocean metaproteomics, Meyer says, “I think a lot about when and why microbes produce different proteins to adapt to environmental changes, and how that depends on the rest of the microbial community and the exchange of metabolic products between organisms.”
Meyer had hoped to participate in the previous year’s contest, but the time constraints of her lab work put her submission on hold. Now in the midst of thesis work, she saw the contest as a way to add some variety to what she was writing while keeping engaged with her scientific interests. However, writing has always been a passion. “I wrote a lot as a kid (‘author’ actually often preceded ‘scientist’ as my dream job while I was in elementary school), and I still write fiction in my spare time,” she says.
Named the winner of the $10,000 grand prize, Meyer says the essay and presentation preparation were extremely rewarding.
“The chance to explore a new topic area which, though related to my field, was definitely out of my comfort zone, really pushed me as a writer and a scientist. It got me reading papers I’d never have found before, and digging into concepts that I’d barely ever encountered. (Did I have any real understanding of the patent process prior to this? Absolutely not.) The presentation dinner itself was a ton of fun; it was great to both be able to celebrate with my friends and colleagues as well as meet people from a bunch of different fields and departments around MIT.”
Envisioning the Future of Computing Prize
Co-sponsored by the Social and Ethical Responsibilities of Computing (SERC), a cross-cutting initiative of the MIT Schwarzman College of Computing and the School of Humanities, Arts, and Social Sciences (SHASS), with support from MAC3 Philanthropies, the contest this year attracted 65 submissions from undergraduate and graduate students across various majors, including brain and cognitive sciences, economics, electrical engineering and computer science, physics, anthropology, and others.
Caspar Hare, associate dean of SERC and professor of philosophy, launched the prize in 2023. He says that the object of the prize was “to encourage MIT students to think about what they’re doing, not just in terms of advancing computing-related technologies, but also in terms of how the decisions they make may or may not work to our collective benefit.”
He emphasized that the Envisioning the Future of Computing prize will continue to remain “interesting and important” to the MIT community. There are plans in place to tweak next year’s contest, offering more opportunities for workshops and guidance for those interested in submitting essays.
“Everyone is excited to continue this for as long as it remains relevant, which could be forever,” he says, suggesting that in years to come the prize could give us a series of historical snapshots of what computing-related technologies MIT students found most compelling.
“Computing-related technology is going to be transforming and changing the world. MIT students will remain a big part of that.”
Crowning a winner
As part of a two-stage evaluation process, all the submitted essays were reviewed anonymously by a committee of faculty members from the college, SHASS, and the Department of Urban Studies and Planning. The judges moved forward three finalists based on the papers that were deemed to be the most articulate, thorough, grounded, imaginative, and inspiring.
In early May, a live awards ceremony was held where the finalists were invited to give 20-minute presentations on their entries and took questions from the audience. Nearly 140 MIT community members, family members, and friends attended the ceremony in support of the finalists. The audience members and judging panel asked the presenters challenging and thoughtful questions on the societal impact of their fictional computing technologies.
A final tally, weighted 75 percent on the essay score and 25 percent on the presentation score, determined the winner.
This year’s judging panel included:
Marzyeh Ghassemi, associate professor, Department of Electrical Engineering and Computer Science and Institute for Medical Engineering and Science;
Caspar Hare, associate dean of SERC and professor of philosophy;
Jason Jackson, associate professor in political economy and urban planning;
Brad Skow, professor of philosophy;
Armando Solar-Lezama, Distinguished Professor of Computing; and
Nikos Trichakis, associate dean of SERC and J.C. Penney Associate Professor of Management.
The judges also awarded $5,000 to the two runners-up: Martin Staadecker, a graduate student in the Technology and Policy Program in the Institute for Data, Systems, and Society, for his essay on a fictional token-based system to track fossil fuels, and Juan Santoyo, a PhD candidate in the Department of Brain and Cognitive Sciences, for his short story of a field-deployed AI designed to help the mental health of soldiers in times of conflict. In addition, eight honorable mentions were recognized, with each receiving a cash prize of $1,000.
Youth gun deaths rise in states that relaxed laws
Study compares child mortality rates before and after 2010 Supreme Court ruling
Mass General Brigham Communications
June 9, 2025
2 min read
Child gun deaths have surged since a 2010 Supreme Court ruling led some state and local governments to relax their firearm laws, according to a new Mass General Brigham study.
Guns are the leading cause of death for youth in the U.S., but little is known about how firearm laws affect child mortality rates. To investigate, researchers looked at whether gun deaths among youth had changed in the years following a Supreme Court ruling that applied the Second Amendment to state and local governments.
In states with the most permissive laws, they found evidence of 6,029 more child deaths due to firearms than would have been expected based on existing demographic trends, plus more than 1,400 excess deaths in states with permissive firearm laws. Rates remained unchanged or decreased in states with stricter laws. The results are published in JAMA Pediatrics.
“We saw over 7,400 more pediatric deaths due to firearms than would have been expected,” said first author Jeremy Faust, an emergency physician at Brigham and Women’s Hospital and instructor at Harvard Medical School. “And when checked against other causes of death, including homicides and suicides not involving firearms, there were not similar changes. This shows that differences in firearm laws matter.”
The study categorized states as most permissive, permissive, or strict based on gun ownership and use policies, and compared their pediatric firearm mortality rates before the ruling (1999-2010) and after it (2011-2023). The researchers also found that existing disparities in pediatric firearm deaths among Black youth increased in permissive states, and persisted, but did not increase, in states with stricter laws. The team plans to share its findings with policymakers and stakeholders and hopes future research will identify which specific policies are most effective.
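The excess-death arithmetic behind comparisons like this can be sketched as follows (illustrative toy numbers, not the JAMA Pediatrics data): fit the pre-ruling mortality trend, project it across the post-ruling years, and sum the observed deaths above that projection.

```python
import numpy as np

def excess_deaths(years_pre, deaths_pre, years_post, deaths_post):
    # Fit a linear trend to the pre-period and extrapolate it forward;
    # "excess" is observed minus expected, summed over the post-period.
    slope, intercept = np.polyfit(years_pre, deaths_pre, 1)
    expected = slope * years_post + intercept
    return float(np.sum(deaths_post - expected))

# Hypothetical toy numbers for a single state group.
years_pre = np.arange(1999, 2011)
deaths_pre = 100 + 2 * (years_pre - 1999)      # modest pre-ruling trend
years_post = np.arange(2011, 2024)
deaths_post = 124 + 6 * (years_post - 2011)    # steeper post-ruling trend

print(round(excess_deaths(years_pre, deaths_pre, years_post, deaths_post)))  # → 312
```

Real analyses of this kind also adjust for population and demographic shifts, which a simple linear projection ignores.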
“Addressing the epidemic of pediatric firearm mortality requires collective action and policy change,” said Onyekachi T. Otugo, an author on the study and an emergency physician at Brigham and Women’s Hospital. “Gun laws truly make a difference for the collective safety of children.”
Son’s diabetes diagnosis sent scientist on quest for cure
Kermit Pattison
Harvard Staff Writer
June 9, 2025
8 min read
Decades later, Doug Melton and team are testing treatment that could make insulin shots obsolete
Doug Melton’s life irrevocably changed the day his child was diagnosed with a life-threatening disease. But unlike most other parents in that situation, he was a molecular biologist uniquely positioned to do something about it.
Now, more than 30 years later, Melton and his colleagues are within sight of a new treatment for Type 1 diabetes that uses stem cells to make healthy insulin-producing cells that can be transplanted into patients. Vertex Pharmaceuticals, a biomedical company headquartered in Boston, is running clinical trials on methods pioneered by Melton and his colleagues at Harvard and a startup company that he founded.
“There are few things better than having an interesting science puzzle,” Melton said. “Especially one which involves educating people and that, if you’re successful, does some good for people in the world.”
Melton recently was named Harvard’s first Catalyst Professor, a senior faculty role that aims to foster collaboration with the private sector. The professorship allows distinguished faculty to engage in external opportunities while maintaining their teaching commitments and contributions to the University’s academic mission.
Hopi Hoekstra, Edgerley Family Dean of the Faculty of Arts and Sciences, hailed Melton’s work as a prime example of scientific discovery generating an advance in medicine.
“At a time when investment in science is under attack, it is hard to imagine a better example of how basic scientific discovery paves the way for breakthroughs in medicine, and how the research done at Harvard can improve the health of everyday people,” she said.
Hatching a biologist
As a child, Melton became captivated by biology. One question especially puzzled him: How did single-celled eggs grow into complex animals with billions of specialized cells? “I remember as a boy in Chicago wondering how the eggs in the pond knew whether to make a salamander or a frog,” he recalled. “That really started me on a career in science.”
That question continued to propel his career. Melton graduated with a biology degree from the University of Illinois and won a Marshall Scholarship to study at the University of Cambridge, where he earned another bachelor’s degree in the history and philosophy of science and a Ph.D. in molecular biology. In 1981, he landed at Harvard and spent a decade studying early development in frogs and mice. He planned to spend his career investigating how bodies formed in vertebrates.
His life took a sudden turn in 1991 when his infant son, Sam, was diagnosed with Type 1 diabetes, a disease in which the immune system attacks and destroys beta cells, the parts of the pancreas that produce insulin, the hormone that regulates our blood sugar. Such patients are forced to rely on external sources of insulin. “I didn’t really even know what that meant,” Melton recalled. “But we quickly found out. My wife was spending all her time really being Sam’s pancreas, injecting him with insulin.”
With two young children, Melton and his wife were overwhelmed. Half in jest, she turned to her husband and suggested he make himself productive. “She looked at me and said, ‘You know, you’re kind of useless,’” he recalled. “‘You’re supposed to be able to do something. Why don’t you work on this?’”
So he did. Melton switched his research to diabetes. His jump was not as radical as it might seem: The development of tissues and organs involved the same mystery that had captivated him from the beginning — how did genes encode the signals that guided the division and differentiation of cells? He began researching how beta cells form in frogs and mice and eventually came to an emerging realm of biology — stem cells. These developmental cells are the precursors that differentiate into all cell types in the body. An idea hatched in his imagination: taking embryonic stem cells and manipulating them to become the cells that produce insulin.
“It never occurred to me that it couldn’t be done,” said Melton. “I just didn’t know how to do it.”
Rising global burden
Diabetes is a disorder in which the body cannot properly metabolize glucose, the blood sugar that is our main source of energy. Normally, glucose is regulated by insulin, produced by beta cells in clusters of endocrine cells called islets of Langerhans scattered throughout the pancreas.
In Type 1 — which can appear any time but often during childhood — the body’s own immune system attacks and destroys the beta cells. In Type 2 diabetes — which often appears later in life and frequently is linked to obesity — beta cells become dysfunctional and fail to supply sufficient insulin.
According to the U.S. Centers for Disease Control, more than 38 million Americans, roughly 11 percent of the population, had diabetes in 2021. It is the eighth leading cause of death in the country.
The burden of diabetes also is rising around the globe, particularly in low- and middle-income countries. According to the International Diabetes Federation, the disease afflicts some 589 million adults around the world, or about 11 percent of the global adult population.
For Melton, this global mission became even more personal. About 10 years after his son was diagnosed with Type 1, his daughter, Emma (then 14), developed the same disease.
Supported through Bush-era cuts
A breakthrough occurred more than 100 years ago with the development of exogenous insulin treatment, originally delivered by injections and now commonly by insulin pumps. But these advances still require external sources of insulin and are treatments — not cures.
Melton has sought a cure by deciphering the developmental biology of the beta cells. What were the developmental steps that turned a stem cell into a beta cell that produced insulin? Could scientists reproduce those events to engineer beta cells that could be transplanted into patients?
His research was bold, and Harvard was the rare place where he could do it. He recalled the University’s support when, in 2001, then-President George W. Bush suspended federal funding for research on human stem cells and later limited federal funding to existing lines of stem cells. The University constructed a new lab for Melton to ensure that his research remained separated from federally funded work. In 2004 Melton and his colleague David Scadden founded the Harvard Stem Cell Institute, a collaboration that now involves more than 350 research faculty.
“I’m proud to say Harvard supported me and we created about 300 stem cell lines and sent them to researchers throughout the world for free,” said Melton. “That really helped the whole field grow.”
Copying nature
Over the decades, Melton and his colleagues made a series of discoveries that laid the groundwork for a new treatment to restore insulin production in patients with Type 1 diabetes.
Melton compares this stem cell-derived islet therapy to “educating” a stem cell and its descendants — introducing the protein signals that trigger or inhibit developmental processes. All told, the method delivers 15 signaling proteins at specific times and places in six stages over 30 days to turn a stem cell into an insulin-producing beta cell. These cells are then transplanted into patients with Type 1 diabetes.
After demonstrating a method to create beta cells in 2014, Melton founded the company Semma Therapeutics (the name is a combination of the names of his two children) to develop a commercial application. In 2019, the company was acquired by Vertex, and it now conducts clinical trials for people with Type 1. Melton said more than a dozen patients have completed the trial and “more than a handful” who have taken this new treatment are “insulin independent” — meaning they have not required additional exogenous insulin thus far.
Continuous glucose monitors measure blood sugar every 15 minutes, but a beta cell does so 1,000 times per second. “I’m not inventing anything,” says Melton. “I’m trying to copy nature.”
A place for science
The new stem-cell-derived therapy represents the first time a fully differentiated human cell has been cultured in the lab from stem cells and then introduced into human clinical trials. Melton says the technique might eventually be adapted to treat Type 2 diabetes. The method also provides insights into how stem cells might be used for other therapies, such as making dopamine-producing brain cells to treat Parkinson’s disease.
“Harvard is the kind of place where you can take a problem that you might not solve in a year, or even five years or maybe 10 years,” said Melton. “I think that’s one of the great things our institutions can do.”
Harvard also has been a great place to nurture young talent — and learn from them, Melton said. The scientist has employed about 50 undergraduates, graduate students, and postdocs in his lab. He also has enjoyed teaching classes such as developmental biology and “Frontiers in Therapeutics: Science of Health,” which explores how basic science can be applied to unsolved medical problems.
“I like teaching undergraduates because, on the whole, they come with fewer prejudices or preconceived notions about what’s worth doing and how to do it,” said Melton. “That challenges my own thinking about what we’re doing. There is an additional motivation — to entice some of the bright young undergraduates for a career in science.”
The scholarships, aimed at exceptionally high-potential domestic and international students, will support study towards AI-related Masters degrees and provide an unparalleled package of benefits. Students will receive full tuition fees, a living stipend, and access to priority work placements with leading UK artificial intelligence companies and government institutions.
The programme, which will open to its first cohort in the 2026–27 academic year, intends to enrol 100 scholars over its first 4 years. Scholars will be selected from the top 1% of AI talent worldwide, with applicants required to demonstrate academic excellence, leadership, and ambassadorial potential, alongside a STEM background.
Uniquely, the Spärck AI Scholarships will give scholars priority access to work placements within UK-based AI companies and organisations, including the UK government’s AI Security Institute (AISI) and i.AI, its in-house AI incubator.
The scholarships are named in honour of Professor Karen Spärck Jones (1935–2007), a pioneering British computer scientist whose groundbreaking work at Cambridge laid the foundations for modern search engines and natural language processing. One of the most remarkable women in computer science, she introduced the concept of inverse document frequency in a seminal 1972 paper; it remains a fundamental principle of information retrieval today.
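Spärck Jones's insight was that a term should be weighted by how rare it is across a collection: terms appearing in few documents carry more information. In its classic form the weight is log(N / df), where N is the number of documents and df the number containing the term. A minimal sketch, using a toy corpus (smoothed variants exist in practice):

```python
import math

# Toy corpus for illustration; idf weights rare terms higher,
# following the classic log(N / df) formulation.
docs = [
    "fusion energy materials",
    "fusion plasma physics",
    "information retrieval systems",
]

def idf(term: str, corpus: list[str]) -> float:
    """Inverse document frequency: log(N / df), where df is the
    number of documents in the corpus containing the term."""
    df = sum(term in doc.split() for doc in corpus)
    return math.log(len(corpus) / df) if df else 0.0
```

Here "fusion" appears in two of three documents and so gets a lower weight than "retrieval", which appears in only one.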
Professor Deborah Prentice, University of Cambridge Vice-Chancellor, said: “Cambridge combines academic excellence with a dynamic, interdisciplinary AI community, from foundational research to real-world impact. We are delighted to be a founding partner in this ambitious initiative, which reflects a shared commitment to attracting exceptional talent and reinforcing the UK’s position as a home for world-class AI. We are especially proud that these scholarships are named after Karen Spärck Jones, a brilliant Cambridge computer scientist.”
A long-time member of the Cambridge community, Professor Spärck Jones was an undergraduate at Girton College (1953 to 1956), a Research Fellow at Newnham College (1965 to 1968), an Official Fellow of Darwin College (1968 to 1980) and a Fellow of Wolfson College (2000 to 2007).
She began her research career at the Cambridge Language Research Unit in the late 1950s and later taught for the MPhil in Computer Speech and Language Processing, on language systems, and for the Computer Science Tripos on information retrieval. She supervised many Cambridge PhD students across a wide range of topics and was a tireless advocate for women in computing, famously declaring: “I think it's very important to get more women into computing. My slogan is: computing is too important to be left to men.”
Her international influence was recognised by numerous awards, including the ACM SIGIR Salton Award, the BCS Lovelace Medal, and election as a Fellow of the British Academy (of which she was also Vice-President from 2000 to 2002) and the American Association for Artificial Intelligence.
The University of Cambridge is delighted to honour her legacy by co-founding this exciting new programme, which was formally announced at London Tech Week.
On 9 June, the Department for Science, Innovation and Technology (DSIT) announced the launch of the Spärck AI Scholarships, a major new initiative to nurture the next generation of AI leaders, with Cambridge proud to join as a founding partner.
Fusion energy has the potential to enable the energy transition from fossil fuels, enhance domestic energy security, and power artificial intelligence. Private companies have already invested more than $8 billion to develop commercial fusion and seize the opportunities it offers. An urgent challenge, however, is the discovery and evaluation of cost-effective materials that can withstand extreme conditions for extended periods, including 150-million-degree plasmas and intense particle bombardment.
To meet this challenge, MIT’s Plasma Science and Fusion Center (PSFC) has launched the Schmidt Laboratory for Materials in Nuclear Technologies, or LMNT (pronounced “element”). Backed by a philanthropic consortium led by Eric and Wendy Schmidt, LMNT is designed to speed up the discovery and selection of materials for a variety of fusion power plant components.
By drawing on MIT's expertise in fusion and materials science, repurposing existing research infrastructure, and tapping into its close collaborations with leading private fusion companies, the PSFC aims to drive rapid progress on the materials necessary for commercializing fusion energy. LMNT will also help develop and assess materials for nuclear power plants, next-generation particle physics experiments, and other science and industry applications.
Zachary Hartwig, head of LMNT and an associate professor in the Department of Nuclear Science and Engineering (NSE), says, “We need technologies today that will rapidly develop and test materials to support the commercialization of fusion energy. LMNT’s mission includes discovery science but seeks to go further, ultimately helping select the materials that will be used to build fusion power plants in the coming years.”
A different approach to fusion materials
For decades, researchers have worked to understand how materials behave under fusion conditions using methods like exposing test specimens to low-energy particle beams, or placing them in the core of nuclear fission reactors. These approaches, however, have significant limitations. Low-energy particle beams only irradiate the thinnest surface layer of materials, while fission reactor irradiation doesn’t accurately replicate the mechanism by which fusion damages materials. Fission irradiation is also an expensive, multiyear process that requires specialized facilities.
To overcome these obstacles, researchers at MIT and peer institutions are exploring the use of energetic beams of protons to simulate the damage materials undergo in fusion environments. Proton beams can be tuned to match the damage expected in fusion power plants, and protons penetrate deep enough into test samples to provide insights into how exposure can affect structural integrity. They also offer the advantage of speed: first, intense proton beams can rapidly damage dozens of material samples at once, allowing researchers to test them in days, rather than years. Second, high-energy proton beams can be generated with a type of particle accelerator known as a cyclotron commonly used in the health-care industry. As a result, LMNT will be built around a cost-effective, off-the-shelf cyclotron that is easy to obtain and highly reliable.
LMNT will surround its cyclotron with four experimental areas dedicated to materials science research. The lab is taking shape inside the large shielded concrete vault at the PSFC that once housed the Alcator C-Mod tokamak, a record-setting fusion experiment that ran at the PSFC from 1992 to 2016. By repurposing C-Mod’s former space, the center is skipping the need for extensive, costly new construction and accelerating the research timeline significantly. The PSFC’s veteran team — who have led major projects like the Alcator tokamaks and advanced high-temperature superconducting magnet development — are overseeing the facility’s design, construction, and operation, ensuring LMNT moves quickly from concept to reality. The PSFC expects to receive the cyclotron by the end of 2025, with experimental operations starting in early 2026.
“LMNT is the start of a new era of fusion research at MIT, one where we seek to tackle the most complex fusion technology challenges on timescales commensurate with the urgency of the problem we face: the energy transition,” says Nuno Loureiro, director of the PSFC, a professor of nuclear science and engineering, and the Herman Feshbach Professor of Physics. “It’s ambitious, bold, and critical — and that’s exactly why we do it.”
“What’s exciting about this project is that it aligns the resources we have today — substantial research infrastructure, off-the-shelf technologies, and MIT expertise — to address the key resource we lack in tackling climate change: time. Using the Schmidt Laboratory for Materials in Nuclear Technologies, MIT researchers advancing fusion energy, nuclear power, and other technologies critical to the future of energy will be able to act now and move fast,” says Elsa Olivetti, the Jerry McAfee Professor in Engineering and a mission director of MIT’s Climate Project.
In addition to advancing research, LMNT will provide a platform for educating and training students in the increasingly important areas of fusion technology. LMNT’s location on MIT’s main campus gives students the opportunity to lead research projects and help manage facility operations. It also continues the hands-on approach to education that has defined the PSFC, reinforcing that direct experience in large-scale research is the best approach to create fusion scientists and engineers for the expanding fusion industry workforce.
Benoit Forget, head of NSE and the Korea Electric Power Professor of Nuclear Engineering, notes, “This new laboratory will give nuclear science and engineering students access to a unique research capability that will help shape the future of both fusion and fission energy.”
Accelerating progress on big challenges
Philanthropic support has helped LMNT leverage existing infrastructure and expertise to move from concept to facility in just one-and-a-half years — a fast timeline for establishing a major research project.
“I’m just as excited about this research model as I am about the materials science. It shows how focused philanthropy and MIT’s strengths can come together to build something that’s transformational — a major new facility that helps researchers from the public and private sectors move fast on fusion materials,” emphasizes Hartwig.
By utilizing this approach, the PSFC is executing a major public-private partnership in fusion energy, realizing a research model that the U.S. fusion community has only recently started to explore, and demonstrating the crucial role that universities can play in the acceleration of the materials and technology required for fusion energy.
“Universities have long been at the forefront of tackling society’s biggest challenges, and the race to identify new forms of energy and address climate change demands bold, high-risk, high-reward approaches,” says Ian Waitz, MIT’s vice president for research. “LMNT is helping turn fusion energy from a long-term ambition into a near-term reality.”
The Schmidt Laboratory for Materials in Nuclear Technologies (LMNT), made possible by a group of donors led by Eric and Wendy Schmidt, will be housed at MIT’s Plasma Science and Fusion Center and use a compact cyclotron to accelerate the testing of materials for use in tomorrow’s commercial fusion power plants.
The Cambridge x Manchester Innovation Partnership – the first trans-UK innovation collaboration of its kind – will receive £4.8m of funding from Research England over 3 years, it has been announced. With further investment from the 2 universities, the total funding for the partnership will be £6m. The initiative aims to strengthen research networks, accelerate scale-up growth, drive private sector investment into research and development, and attract new foreign direct investment.
Led by the universities of Cambridge and Manchester, ‘CBG×MCR’ is supported by 2 mayoral combined authorities, city councils, key businesses (such as AstraZeneca, ARM, ROKU, and Microsoft), venture capitalists (Northern Gritstone and CIC), and angel investors (Cambridge and Manchester Angels).
As well as strengthening relations within and between the 2 cities, the partnership – fronted by Innovate Cambridge and Unit M – will pilot new approaches for delivering inclusive growth, providing insights to other cities, the wider higher education sector community, and local and national governments in the UK and internationally.
In the UK, collaboration has traditionally been focused on geographically proximate areas, such as Manchester-Liverpool, or Edinburgh-Glasgow. This new model of hyper-connected, place-to-place partnering – similar to those developed in the USA’s Northeast Corridor, Coastal California, and China’s Greater Bay Area – combines complementary innovation capabilities to create globally competitive connected ecosystems.
Amplifying what each city can achieve independently, the model aims to drive national economic growth, responding directly to the UK government’s national industrial strategy.
Professor Deborah Prentice, Vice-Chancellor of the University of Cambridge, said: “This pioneering initiative brings together the combined strengths of Cambridge and Manchester to create something that is truly groundbreaking. By connecting our cities, we’re helping to build a more collaborative and dynamic environment in which innovative research can connect with industry, venture capital, and entrepreneurs, to drive economic growth and deliver real benefits for people and places across the UK.”
Paul Bristow, Mayor of Cambridgeshire and Peterborough, said: "This is exactly the kind of partnership working we need to fire up innovation-led growth in both our regions. I’m delighted to see it backed with new funding. By joining forces to drive the discoveries of tomorrow, we can bring in investment, support exciting new businesses, and deliver real jobs and opportunity for our communities."
Professor Duncan Ivison, President and Vice-Chancellor at the University of Manchester, said: "Our partnership with Cambridge marks a new model of collaboration between UK universities. It brings together the distinctive strengths of each of our universities and cities, connecting 2 of the great innovation ecosystems to scale up what we can achieve. This new approach to innovation accelerates the time between discovery and impact, getting ideas into the real economy and our communities even more quickly to drive inclusive growth.”
Jessica Corner, Executive Chair of Research England, said: “This investment underscores our commitment to fostering innovation and collaboration across England. By connecting the vibrant ecosystems of Cambridge and Manchester, we aim to drive significant economic growth and create a model for place-based innovation that can be replicated nationwide."
Cambridge will join forces with Manchester as part of a pioneering collaboration to harness the combined strengths of both universities and cities – and boost innovation and growth for the whole of the UK
When we walk through a museum, we often marvel at the beauty, uniqueness, or exquisite craftsmanship of treasured artefacts, each with its own story to tell. But what if we could go deeper – beyond what the eye can see – to understand how these objects were made and the materials used?
This question brought together scientists, conservators, curators, and archaeologists from around the world to the “Inspiring Objects” session co-organised by the International Atomic Energy Agency (IAEA) and the Singapore Synchrotron Light Source (SSLS) at NUS. Held at the Peranakan Museum on 20 May 2025, the session showcased how advanced analytical techniques – mostly powered by particle accelerators – are changing the way we understand and preserve art, history, and heritage.
The “Inspiring Objects” session is part of a five-day workshop, jointly organised by IAEA and NUS, to provide a comprehensive overview of the latest experimental advancements and methodologies to optimise best practices in accelerator-based heritage science.
“We are here to show how our colleagues in nuclear science can contribute to the preservation of cultural and natural heritage. Today, we are not diving into scientific complexity—but instead offering a glimpse of how these technologies can help us understand the past and connect with it in new ways,” said Dr Aliz Simon, a nuclear physicist at the IAEA Division of Physical and Chemical Sciences.
Using advanced scientific techniques to preserve cultural treasures
Accelerator technology uses electromagnetic fields to propel charged particles such as electrons or protons to high speeds and energies, enabling analysis of objects with extreme sensitivity. It can probe layers beneath an object’s surface at the nanometre scale without cutting or destroying the object. Accelerator techniques push the boundaries of artefact analysis, revealing properties such as pigments, artistic techniques, and materials that are invisible to the naked eye.
At the “Inspiring Objects” session, scientists from 13 countries presented case studies, each demonstrating how accelerator-based and other advanced analytical methods can uncover the fine details of how artefacts were made and the process of degradation to better guide conservation and preservation strategies.
Presenting her study on a Portrait of Tan Beng Wan (1851-1891), Dr Agnieszka Banas, Principal Research Fellow from NUS SSLS, explained how optical-photothermal infrared (O-PTIR) spectroscopy, a recently developed technique that measures infrared absorption with high spatial resolution and sensitivity, offers a promising alternative for the analysis of heritage samples. Using O-PTIR, she uncovered the chemical changes behind the whitish haze on the bottom left of the painting: fine layers and tiny particles of degradation products, undetectable with older techniques, that were subtly altering the painting’s appearance. These included a thin layer of gordaite less than 2 micrometres thick and crystalline zinc soaps, providing essential data that conservators can act on.
“With novel technology, we’re able to zoom in like never before—to see the exact compounds causing deterioration. That means we can give conservators the precise information they need to act. This work isn’t just about solving puzzles. We’re doing it for future generations,” said Dr Agnieszka.
Unpacking the details of another uniquely Singaporean artefact, a piece of Peranakan beadwork, was Dr Krzysztof Banas, Principal Research Fellow from NUS SSLS. Using synchrotron-based micro-X-ray Fluorescence Microscopy (microXFM), the researchers traced how the colourful glass beads of the piece deteriorated over time.
“We found that the fading colours in these beads are caused by the leaching of potassium ions, which alters the internal structure of the glass. That discovery isn’t just scientifically interesting—it gives conservators a roadmap for how to protect these objects by controlling temperature and humidity,” said Dr Krzysztof.
45,000 years of stories – told through science
Among the presentations at the “Inspiring Objects” session were studies that span centuries, continents, and cultures, all brought together with a common goal of preserving key components of cultural heritage.
The oldest artefact presented at the session was the largest rock art region in Indonesia found in the Maros-Pangkep Karst area in South Sulawesi. This area is home to more than 400 rock art cave sites with distinctive images including hand stencil art, with squatting figures dating as far back as 45,000 years ago, making it a significant archaeological site. Using a combination of advanced analytical methods, the researchers from Indonesia found that the pigment weathering of these rock art images is due to the formation of whewellite, a mineral which is associated with microbial activity, on the pigment surface.
Drawing focus onto the musical arts, researchers from Italy uncovered the acoustic secrets of the 17th-century Guarneri cello, which belonged to Andrea Guarneri (1623-1698), the founder of a family of luthiers from Cremona, a city in Italy. Using synchrotron radiation (SR) phase-contrast microtomography, an imaging technique that uses X-rays to visualise the internal structure of materials, the researchers revealed the secret recipe behind the coating of the cello that could be responsible for the quality of these instruments.
Researchers from China also demonstrated how understanding the past can pave the way for future innovations. Using ptychographic X-ray computed tomography (PtyCT) combined with X-ray fluorescence computed tomography (XRF-CT), a method for 3D imaging and elemental mapping at high resolution, the researchers found that the glaze on a Song Dynasty (960-1279 CE) ceramic from the Jianyang kilns in Fujian province contains crystals of a rare form of iron oxide. This form of iron oxide remains challenging to synthesise with modern techniques; studying these crystals could provide insight into its unique patterns, guiding modern synthesis for applications in materials science.
Science for the humanities
The session provided an eye-opening, cross-disciplinary journey through time and space, united by a single idea: that every object has a story, and sometimes the story can be retold with new facts and perspectives through the lens of science.
The Guardian has launched Secure Messaging as a module within its mobile news app to provide a secure and usable method of establishing initial contact between journalists and sources.
It builds on a technology, CoverDrop, developed by Cambridge researchers, and includes a wide range of security features. The code is available online and is open source, to encourage adoption by other news organisations.
The app automatically generates regular decoy messages to the Guardian to create ‘air cover’ for genuine messages, even when they are passing through the cloud, preventing an adversary from finding out if any communication between a whistleblower and a journalist is taking place.
“That’s important in a world of pervasive surveillance where it has become increasingly hazardous to be a whistleblower,” said Cambridge’s Dr Daniel Hugenroth, who co-led the development of CoverDrop with Professor Alastair Beresford.
The technology also provides digital ‘dead drops’ – like virtual bins or park benches – where messages are left for journalists to retrieve. These are just two of a suite of functions that protect a source from discovery even if their smartphone is seized or stolen.
CoverDrop encrypts outgoing messages between the source and their named contact at the news organisation to ensure no other party can read their content. For this, it relies on cryptography using digital security key pairs consisting of a public and a secret key.
The source is given the public key that instructs the existing encryption technology on their smartphone to encrypt their messages to the Guardian. This key only works one way, so it can lock – but not unlock – their messages. The only person able to decode them is the whistleblower’s named contact at the Guardian, who uses their secret key to retrieve and decode the messages left in the dead drop.
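The one-way property described above can be illustrated with a textbook toy RSA example. These tiny numbers are purely illustrative, chosen for readability; CoverDrop's actual cryptography uses modern schemes with much larger keys.

```python
# Toy RSA with textbook parameters: anyone holding the public key
# (n, e) can lock a message, but only the secret exponent d unlocks it.
n, e = 3233, 17      # public key: n = 61 * 53
d = 2753             # secret key: e * d = 1 (mod (61-1)*(53-1))

def lock(m: int) -> int:
    """Encrypt with the public key, as the source's phone would."""
    return pow(m, e, n)

def unlock(c: int) -> int:
    """Decrypt with the secret key held only by the journalist."""
    return pow(c, d, n)

ciphertext = lock(65)
assert unlock(ciphertext) == 65  # only the secret key round-trips
```

The asymmetry is the point: knowing `n` and `e` lets anyone produce `ciphertext`, but recovering the message requires `d`, which never leaves the journalist's side.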
CoverDrop also pads all messages to the same length, making it harder for adversaries – whether acting on their own behalf or for an organisation or state – to distinguish real messages from decoy ones.
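The padding idea can be sketched as follows. This is a minimal illustration of the principle (fixed-length payloads make real and decoy messages indistinguishable by size), not CoverDrop's actual encoding; the constant and function names are assumptions for the example.

```python
import secrets

MESSAGE_LEN = 512  # every payload is exactly this long before encryption

def pad(message: bytes) -> bytes:
    """Prefix a real message with its length and pad to the fixed size."""
    if len(message) > MESSAGE_LEN - 2:
        raise ValueError("message too long")
    body = len(message).to_bytes(2, "big") + message
    return body + bytes(MESSAGE_LEN - len(body))

def decoy() -> bytes:
    """A decoy payload: random bytes of the same fixed length."""
    return secrets.token_bytes(MESSAGE_LEN)

# Once encrypted, size is the main observable on the wire, and it is
# identical for real and decoy traffic.
real = pad(b"I have documents about the cover-up")
fake = decoy()
assert len(real) == len(fake) == MESSAGE_LEN
```

Because both payloads are then encrypted, an observer watching the network sees only a steady stream of equal-sized ciphertexts, whether or not a real message is inside.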
The system fulfils a need long identified by media organisations: providing a highly secure, yet easy-to-use, system for potential sources who want to contact them with sensitive information.
“The Guardian is committed to public-interest journalism,” said Luke Hoyland, product manager for investigations and reporting at The Guardian. “Much of this is possible thanks to first-hand accounts from witnesses to wrongdoing. We believe whistleblowing is an important part of a functioning democracy and will always do our utmost to avoid putting sources at risk. So we're delighted to have worked with the University of Cambridge on turning their groundbreaking CoverDrop research into a reality.”
The research began with workshops with UK news organisations to find out how potential sources first contacted them. The researchers learned that whistleblowers often reach out to them via platforms that are either insecure or hard to use.
Beresford said that when they started looking for a practical solution to this problem, “we realised that news organisations already run a widely available platform from which they can offer a secure, usable method of initial contact – their mobile news app.”
“When sources send messages, their confidentiality and integrity can be assured through the secure messaging protocols on their smartphone,” said Hugenroth. “CoverDrop goes one step further and also protects the communication patterns between sources and journalists by using decoy messages to provide cover and padding all messages to the same length.”
Importantly, the researchers say, users of the new CoverDrop system won’t need to install any specialist software that chews up large amounts of battery power or slows down their phones.
Its simple interface looks and works just like a typical messaging app, and no traces are left on the device to show that CoverDrop has ever been used.
“When you open the app,” said Beresford, “even if you’ve already set up an account on it, the CoverDrop feature will look as though you haven’t used it. Its home screen will only offer two prompts – ‘Get started’ or ‘Check your message vault’. This is because if it’s stolen, or a user is under duress, we don’t want your phone to reveal to anyone that you’ve already used it.”
The development of CoverDrop began in the years after the whistleblower Edward Snowden, a former US intelligence contractor, leaked classified documents revealing the existence of global surveillance programmes.
This showed, the researchers said, the mass surveillance infrastructure available to nation states, which has profound implications for those who wish to expose wrongdoing within companies, organisations, and government.
Work on CoverDrop was first unveiled at an international Symposium on Privacy-Enhancing Technologies in 2022 by the Cambridge researchers (who originally included the late Professor Ross Anderson, a leader in security engineering and privacy).
When they published their peer-reviewed paper on the research at the conference, it attracted interest from the Guardian which, in collaboration with the researchers, subsequently helped develop CoverDrop from an academic prototype into a fully usable technology.
“The free press fulfils an important function in a democracy,” said Beresford. “It can provide individuals with a mechanism through which they can hold powerful people and organisations to account. We’re delighted that the Guardian is the first media organisation to adopt CoverDrop and will use it to help protect their sources.”
“All the CoverDrop code will be available online and open source,” said Hugenroth. “This transparency is essential for security-critical software and allows others to audit and improve it. Open-sourcing the code also means that other news organisations, particularly those with expertise in investigative journalism, could also use it. We would be excited to see them do so.”
References:
Mansoor Ahmed-Rengers et al. ‘CoverDrop: Blowing the Whistle Through A News App.’ Paper presented at the Privacy Enhancing Technologies Symposium. 12 July 2022, Sydney, Australia. DOI: 10.2478/popets-2022-0035
Thayer Hall in Harvard Yard. Photo by Dylan Goodman
Judge blocks Trump order on international students
Hearing set for June 16
June 6, 2025
A federal judge on Thursday granted the University’s motion to block an executive order by President Trump banning international students from entering the U.S. to attend Harvard.
Judge Allison Burroughs of U.S. District Court in Massachusetts had already halted the government’s effort to terminate Harvard’s participation in the Student Exchange Visa Program. Her Thursday ruling came hours after the University amended its visa lawsuit in response to the executive order, which was signed Wednesday.
The visa action and the executive order are part of an “escalating campaign of retaliation by the government in clear retribution for Harvard’s exercising its First Amendment rights to reject the government’s demands to control Harvard’s governance, curriculum, and the ‘ideology’ of its faculty and students,” the University’s complaint says.
In a message to the Harvard community Thursday, President Alan Garber called the Trump order “yet another illegal step taken by the administration to retaliate against Harvard.”
On Friday, Garber noted that the Schools are working “to ensure that our international students and scholars will be able to pursue their academic work fully.” In addition, the Harvard International Office is assisting students whose plans have been disrupted by the government’s actions.
Burroughs has set a June 16 hearing for further arguments in the case.
Three years ago, Massachusetts passed a law prohibiting the disposal of used clothing and textiles. The law aims to reduce waste and promote recycling and repurposing. While many are unaware of the nascent law, MIT students at the helm of Infinite Threads were happy to see its passage.
Infinite Threads is a spinoff of the Undergraduate Association Sustainability Committee — a group of students running reuse-related events since 2013. With new leadership and a new focus, Infinite Threads went from holding three to four popup sales a year to nine.
A group of students collects lightly used clothing from MIT community members and sells the items at deeply discounted prices at popup sales held several times each semester. Sales take place outside of the Student Center to optimize the high foot traffic in the area. Anyone can purchase items at the sales, and Infinite Threads also accepts clothing donations at the popups as well.
Administrators Cameron Dougal ’25, a recent graduate who majored in urban science and planning with computer science (Course 11-6), and Erin Hovendon, a rising senior in mechanical engineering (Course 2), led the small student-run organization for much of the 2024-25 academic year.
“Our mission is to reduce material waste. We collect a lot of clothing at the end of the spring semester when students are moving out of their residence halls. We then sell items such as shirts, jackets, pants, and jeans at the popup sales for $2 to $6,” says Dougal, adding, “We often have a lot of leftover T-shirts from residence hall events and career fairs that we give away for free. These MIT-related items demonstrate the importance of a hyperlocal reuse ecosystem. As soon as these types of items leave campus, there is a much lower chance that they will find a new home.”
Hovendon, who has an interest in sustainability and hopes to pursue a career in renewable energy, joined the group after seeing an email sent to DormSpam. “It was a great opportunity to jump into a sustainability leadership role while also helping the MIT community. We aim to offer affordable clothing options, and we get a lot of positive feedback about the thrift popups — I love hearing from students that they got clothing items they now wear frequently from one of our sales,” says Hovendon.
“Any money made at the popups is used to pay the student workers and to rent the U-Haul we use to bring the clothing we store at MIT’s Furniture Exchange warehouse to the Student Center. Our goal is simple: we want to keep clothing out of landfills, which in return helps the planet,” says Dougal.
Studies show that a pair of cotton denim jeans can take up to a year to decompose, while jeans or items of clothing made with polyester can take 40-200 years to decompose. According to the Environmental Protection Agency, blue jeans account for 5 percent of landfill space. Infinite Threads saves clothing items from ending up in landfills.
Hovendon agrees. “We don’t make a lot of money at the sales — it’s not our goal. Our goal is to help the environment. We received some seed funding from the MIT Women's League, the Office of Sustainability, and the MIT Fabric Innovation Hub.”
Infinite Threads also collaborates with the MIT Office of Sustainability (MITOS) to bring awareness to their work.
“Infinite Threads is a fantastic model for how students can directly take action, empower individuals, and leverage the collective community to design out clothing waste and climate impacts through the reuse culture. MIT students, like Cameron and Erin, are well-positioned to tackle sustainability challenges on campus and out in the world as they bring a willingness to solve complex challenges, experiment with many solutions, and grapple with operational realities,” says Brian Goldberg, assistant director of MITOS.
In 2024-25, the club sold over 1,000 clothing items. Any clothing that does not sell at the thrift shop is given to Helpsy, an organization that helps keep clothing out of the trash and landfills while also creating jobs. Dougal and Hovendon say they have diverted about 750 pounds of textiles to Helpsy in 2024-25 alone.
Lauren Higgins, a rising senior majoring in political science who took over managing Infinite Threads from Dougal earlier this year, says, “I originally joined as one of the staff for Infinite Threads, and I love being able to help out with waste reduction and sustainability efforts on campus. It's been great to see our impact, and I hope we're able to continue that this upcoming year.”
Cameron Dougal (center) helps a customer at an Infinite Threads popup sale. “Our goal is simple,” says Dougal. “We want to keep clothing out of landfills, which in return helps the planet.”
Every day, students at MIT come together to learn, work, play, and form communities large and small. Community Spotlight stories are intended to provide a glimpse inside the Institute's classrooms, labs, and other gathering places to show how people come together, and how they share what it means to be part of MIT's community of scholars.
A doctoral seminar in economics is not a class for dabblers. It is, however, the surest path to understanding the intense complexity of economic and political forces that play out in daily life. “A lot of economics is about prices, and quantities, and things that take place once you take the market system as given,” says Daron Acemoglu, who taught economics class 14.773 (Political Economy: Institutions and Development) with Jacob Moscona this spring. “But which market system? What laws? Where do these come from? There’s nothing preordained about the way we organize the economy. So how did we end up here?”
“This is a PhD-level course, so we’re going to move fast,” Acemoglu says on the first day of class in early February. There are about two dozen students in a seminar room overlooking the Charles River. Some are sharply dressed business-school types in skirts or slacks and carefully coiffed hair; others are in jeans, T-shirts, and hoodies, and don’t seem to own combs. Acemoglu, with a salt-and-pepper goatee, small glasses on a round face, rumpled pants, and a tan blazer over a black turtleneck, lands somewhere in between.
The classroom is designed with the lecturer as the focal point of three concentric half-circles of white tables, with seating bolted to the floor at each tier. It feels intimate, utilitarian, a little cramped; the white walls and tabletops pick up the glare from the frozen river and snow-covered ground to fill the room with brilliant light. “We’re going to start with a broad overview to give you a sense of what I’m going to take for granted later. But I want you to ask questions,” he says, and turns to his slides.
First up is a map of the world showing countries color-coded according to per-capita incomes. “Why the variations?” Acemoglu asks. “There are geographical, cultural, historical, and institutional explanations. Let’s start with the institutions — a term we could spend the rest of the semester defining,” he adds drily. He offers a slightly modified definition from American economist Douglass North: “institutions are humanly devised constraints that shape human interaction and enable incentives.”
A hand shoots up: “What are markets in this context?”
Acemoglu pauses and puts his finger to his chin. “That’s a great question … I don’t know.” He leaves his finger on his chin. “But how they’re structured is a question of institutions. Hold on to that thought. We’ll come back to it.” Another hand shoots up as he turns back to his slides. “Yes!” he steps quickly to stand in front of the student who has her hand up.
“Are social norms the institutions of a society?” she asks.
“The relationships between norms and institutions are extremely complex. We’ll get there. Bear with me,” he says. There is a crackle in the air that wasn’t there before. Acemoglu has barely started his lecture, and he seems as energized by the questions as he is eager to move forward. He dives back into his slides: “Colonialism has shaped two thirds of the world’s economy—”
He moves on to “inclusive” and “extractive” economies (“If you don’t like these terms,” he says, “you can come up with your own, but proliferation of terms can become its own problem”), de facto and de jure political power (“Elon Musk and I both have one vote, but we do not have the same amount of power”), the relative population densities in colonized territories, settler mortality and disease, the Boer War, Franklin Delano Roosevelt’s battle with the U.S. Supreme Court, Argentinian political history, and more.
Acemoglu’s lecture pace feels roughly equivalent to sitting in the Millennium Falcon as it jumps to hyperspeed. Working with more than 50 slides, he often skips forward or backward several at a time to illustrate a point. And the questions keep coming (“What about countries like India that had strong pre-contact institutions?”). With each one, he walks toward the student asking and comes to a complete, fully attentive stop. Often, he will connect a new question with one posed earlier and flip back through his slides for an illustration. Then, without a pause, he picks up exactly where he left off and resumes the journey — at speed.
“In most economics classes, you study a specific incident or a narrow set of questions,” says Netanel Ben-Porath, an Israeli doctoral student from Northwestern University who is sitting in on 14.773. “In this class, we take on big questions, and Professor Acemoglu uses different examples from different places and periods of history. I am amazed at how he organizes this into a coherent class,” he adds. “It gives you access to incredibly deep theoretical thinking and truly vast knowledge at the same time.”
It starts with history
The students in Acemoglu’s classes, and plenty of other people, have encountered many of his ideas already. He has published, with co-authors James A. Robinson and Simon Johnson PhD ’89 of MIT Sloan School of Management, some of the most influential books of this century, including “Why Nations Fail,” “The Narrow Corridor,” and “Power and Progress.” Devoted to the intersections of politics, economics, history, and technology, the work has been translated into more than 30 languages and been wildly popular worldwide. (The books, and others, along with the trio’s hundreds of scholarly articles, led to a Nobel Prize in economic sciences last fall.)
Acemoglu’s books also offer a glimpse into what the students in 14.773 are learning — and how they’re learning it. For example, in the opening chapter of “Why Nations Fail,” Acemoglu and coauthor James A. Robinson observe that people living on either side of the Arizona-Sonora border town of Nogales experience deeply contrasting economic circumstances, even though they are separated by little more than a fence. By way of explanation, the book launches into briskly paced histories of North and South America — from the conquistadors and the Puritans to the maquiladores and NAFTA. (Later on, the book does similar rundowns on North and South Korea and the two halves of the Kuba Kingdom in the Democratic Republic of Congo to illustrate related concepts.) The depth and variety of historical detail in Acemoglu’s books can be breathtaking, but the delivery is steady and engaging. In print, he has time to develop nuanced arguments with carefully chosen details and nimbly orchestrated asides.
For the students in his PhD seminar, however, time is short and the demands are much steeper. Each meeting of 14.773 is one leg in a 14-week race through the broad domain of political economy. Individual classes are built around a particular issue or dynamic — labor coercion, slavery and colonialism, voting and constitutions, conflict and regime change, elite networks and corruption, the environment, and others. Class sessions typically start off with a standard lecture, but things get more complicated after the first 15 or 20 minutes.
Acemoglu makes them do the math.
Making models
By the end of February, enrollment in 14.773 had thinned out a bit, but the pace of the class did not slacken as the semester went on. On another bright, frigid day, Acemoglu walks in, sets down his paper cup of coffee, plugs in an iPad for his slides, and launches into the topic of weak states and state-building. “By weak states, we mean those that lack capacity,” he says. “And by capacity, we mean their ability to do what they are meant to do — what people intend for the public good. Also,” he adds, “you can define state capacity as the ability of the state to impose its will on the people who live in it.”
Working with examples from postcolonial sub-Saharan Africa and some concepts borrowed from the German sociologist Max Weber, Acemoglu defines the four main elements of state capacity: a “monopoly on legitimized violence,” the ability to levy taxes and regulate the economy, providing reliable infrastructure, and the autonomy of their bureaucracies (the last, he notes, “is related because of the need for state institutions to be somewhat autonomous from political power”). He rounds out his introduction with a question: “Why do weak states remain weak when their weakness is not economically productive?”
Then he changes gears. Starting with the examples he has just presented, Acemoglu begins to spell out a mathematical model that puts the forces of state capacity and other variables into calculable relationships with each other. The content on his slides changes from bullets, tables, and line charts to equations, regressions, and specialized mathematical functions and notations. Many of the variables come directly from the lectures (“Gt denotes government spending on public goods,” he points out) and are then combined with other social dynamics, economic principles, and mathematical functions.
Acemoglu’s speaking pace actually seems to pick up as he presents the model and describes the assumptions driving its different elements. “To make this easy,” he notes early on, “we’ll assume a finite set of states, and each individual uses a discount factor to determine relative payoffs.” As he rolls on, Acemoglu sometimes pauses to explain how he is simplifying a phenomenon, or he sidetracks to shorthand a principle: “As you can see,” he says, pointing to one variable in a formula that stretches all the way across a slide, “that’s a relatively innocuous simplification. You could put any constant here you want; it doesn’t mean anything.”
The students are quiet as he walks through the model. Some take notes, others follow along with his slides on their laptops. When Acemoglu pauses to sip his coffee, a few lean in with questions. They are much more interested in the interpretations and assumptions driving the model than they are in the calculations, which are as obvious to them as they are to Acemoglu.
“In this model, the tax rates are set once, after the state has decided its investment levels,” one asks. “Won’t people adjust their output?”
“The state should always assume citizens hide a portion of their output from taxation, but this concealment has costs,” Acemoglu says. “This should be a part of the state’s model. But you’re right about the sequence,” he adds. “Reality is a little bit more complicated.”
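The concealment trade-off in that exchange can be captured in a toy optimization. This is an illustrative sketch with an assumed quadratic concealment cost, not the course's actual model: a citizen hides a fraction h of output, escaping the tax rate t on what is hidden but paying a cost that grows with the square of h.

```python
def best_concealment(t: float, c: float) -> float:
    """Payoff per unit of output: (1 - h)*(1 - t) + h - c*h*h.

    The first-order condition t - 2*c*h = 0 gives h* = t / (2c),
    capped at 1 (you cannot hide more than everything).
    """
    return min(t / (2 * c), 1.0)

# With a 30% tax rate and cost parameter c = 0.5, the citizen
# optimally conceals 30% of output.
print(best_concealment(0.3, 0.5))  # 0.3
```

The point matches Hugenroth's... rather, Acemoglu's remark: concealment is positive whenever taxes are, but its costs keep it bounded, and a well-specified state model anticipates exactly this response.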
“The models are like metaphors for helping you see the world,” says Charlotte Siegmann, a second-year doctoral student in economics, after class. “I enjoy finding the boundaries and conditions for the metaphor. The models can sometimes help us see more patterns — like how things fit together.”
“It’s very difficult to interpret data,” says Santiago Torres, a first-year student pursuing a double-PhD in economics and statistics. “You can always get the computer to spit out a number. But making sense of that number, knowing what else to expect, trying to design policies — all of this requires models.”
A mix of the technical and the frontiers
Students enroll in 14.773 for lots of reasons. Initially, some show up simply because of Acemoglu’s star power (a visiting student from a nearby university asked him to autograph her copy of “Power and Progress” after the first class). Others, mostly postdocs, are visiting from nearby organizations like the National Bureau of Economic Research. By the time spring break has come and gone, however, it’s really down to the economists.
Torres, the first-year student, hasn’t settled on a research direction yet. He is taking the class to sharpen his technical acumen and to get a bird’s-eye view of his discipline. Originally from Bogotá, Colombia, he worked for two years in a predoctoral role for James A. Robinson (Acemoglu’s co-Nobelist at the University of Chicago) before coming to MIT. “I already know most of James’ questions,” he says. “I want to go back to the questions that got me into economics. This is a very nice class to take early on because of the scope,” he adds. “Once you make your choices, it can get very narrow.”
Netanel Ben-Porath, the visitor from Northwestern, points specifically to Acemoglu’s models as a motivator for his own learning. “We make arguments in economics by formalizing models,” he says. “And there is always a trade-off between how realistic you can make the model and how complicated it is. I am really learning from this class how to do work that people can follow, but that remains realistic.”
Even though he hasn’t started a PhD program yet, Christian Vogt is clearer on his goal: “I want to go into this field.” Currently a predoctoral researcher in the economics department with Acemoglu and David Autor, Vogt sees political economy as “taking a step back — instead of studying transactions that solve political problems, it looks at problems before the transaction is agreed on. I am very interested in understanding problems like how organized labor can leverage its power,” he adds. “These kinds of problems don’t get solved by just putting more information out there.”
For Acemoglu, all of these outcomes make sense. “A PhD course should be egging you on to become a researcher,” he says. “I’m trying to mix some of the technical materials and some of the vision and ideas about where the frontiers are. For some, this class will never translate into anything,” he adds. “Some will be motivated to do empirical work. Some will be triggered to do more theory. They’re learning and they're experimenting and they’re generating their own ideas. That’s what we’re here for.”
George Efstathiou, Emeritus Professor of Astrophysics (1909) at Cambridge’s Institute of Astronomy, shares the prize with Professor John Richard Bond from the Canadian Institute for Theoretical Astrophysics and the University of Toronto.
They were recognised for their pioneering research in cosmology, in particular for their studies of fluctuations in the cosmic microwave background. Their predictions have been verified by an armada of ground-, balloon- and space-based instruments, leading to precise determinations of the age, geometry, and mass-energy content of the universe.
Cosmology has undergone a revolution in the past two decades, driven mainly by increasingly precise measurements of the angular power spectrum of fluctuations in the temperature and polarisation fields of the cosmic microwave background, a relic of the early universe, most notably by NASA’s Wilkinson Microwave Anisotropy Probe spacecraft (2001–2010) and the European Space Agency’s Planck spacecraft (2009–2013).
These fluctuations are small — the strength of the background radiation is the same in all directions to better than 0.01% and it is only slightly polarised — but they offer a glimpse of the universe when it was very young, a test of many aspects of fundamental physics, insights into the nature of dark matter and dark energy, and measurements of many fundamental cosmological parameters with accuracies unimaginable to cosmologists a few decades ago.
Although many researchers contributed to the development of the theoretical framework that governs the behaviour of the cosmic microwave background, Bond and Efstathiou emphasised the importance of the background as a cosmological probe and took the crucial step of making precise predictions for what can be learned from specific models of the history and the composition of the mass and energy in the universe.
Modern numerical codes used to interpret the experimental results are based almost entirely on the physics developed by Bond and Efstathiou. Their work exemplifies one of the rare cases in astrophysics where later experimental studies accurately confirmed unambiguous, powerful theoretical predictions.
The interpretation of these experiments through Bond and Efstathiou’s theoretical models shows that the spatial geometry of the observable universe is nearly flat, and yields the age of the universe with a precision of 0.15%, the rate of expansion of the universe with a precision of 0.5%, the fraction of the critical density arising from dark energy to better than 1%, and so on. The measurements also strongly constrain theories of the early universe that might have provided the initial “seed” for all the cosmic structure we see today, and the nature of the dark matter and dark energy that dominate the mass-energy content of the universe.
Both Bond and Efstathiou have worked closely with experimentalists to bring their predictions to the test: they have been heavily involved in the analysis of cosmic microwave background data arising from a wide variety of experiments of growing sophistication and accuracy.
George Efstathiou received his BA in Physics from the University of Oxford and PhD in Astronomy from Durham University. He has held postdoctoral fellowships at the University of California, Berkeley, USA and the University of Cambridge. He was Savilian Professor of Astrophysics at Oxford, where he served as Head of Astrophysics until 1994. He returned to Cambridge in 1997 as Professor of Astrophysics, where he also served as Director of the Institute of Astronomy and the first Director of the Kavli Institute for Cosmology. He received the 2022 Gold Medal of the Royal Astronomical Society. He is a Fellow of the Royal Society of London and the Royal Astronomical Society, UK. He is a Fellow of King’s College, Cambridge.
At a time when the U.S. Department of Defense increasingly grapples with emerging technologies and their implications for national security, Erik Lin-Greenberg ’09, SM ’09 occupies a rare position at the intersection of theory and practice.
The MIT political scientist and lieutenant colonel in the U.S. Air Force Reserve recently assumed command of the 820th Intelligence Squadron at the Offutt Air Force Base near Omaha, Nebraska, where he now leads dozens of officers and enlisted personnel. He does so while maintaining his full-time role as the Leo Marx Career Development Associate Professor in the History and Culture of Science and Technology at MIT, with areas of focus including emerging technologies, crisis escalation, and security.
Combining these two worlds — the military and the academic — has been natural for Lin-Greenberg, and he anticipates that his duties in both will continue to amplify each other.
“I’m honored to have the privilege of serving as a squadron commander,” Lin-Greenberg says. “I’ve learned a lot about leadership as a professor, an airman, and as a reservist, and look forward to serving the airmen in my squadron.”
From tragedy to service
Lin-Greenberg’s commitment to service was born from tragedy, when thousands of civilians lost their lives in the terrorist attacks of September 11, 2001. “I grew up outside New York City,” he says, “and saw fighter jets flying overhead.”
Soon thereafter, Lin-Greenberg decided to heed what he felt was a call to serve the nation. As an undergraduate at MIT, he began his military career as a member of the U.S. Air Force Reserve Officer Training Corps (ROTC) Detachment 365, which comprises students from MIT, Harvard University, Tufts University, and Wellesley College.
Upon graduating in 2009 with both a bachelor’s and a master’s in political science, he joined the Air Force, where he was commissioned as an intelligence officer. He rose through the ranks, becoming a flight commander at California’s Beale Air Force Base. “I really enjoyed being a member of the Air Force,” he says, “so I transferred to the Reserve when I started my PhD program.”
The scholar-warrior
Lin-Greenberg went on to complete a PhD in political science at Columbia University in 2019. Following fellowships at Stanford University and the University of Pennsylvania, he joined the MIT Department of Political Science as an assistant professor in 2020.
Having deployed to Qatar and Afghanistan and worked with drones early in his Air Force career, Lin-Greenberg says his experiences and immersion in operations have motivated much of his academic research. “Drones are tools of war and statecraft,” he notes, and his forthcoming book explores their use in crises and conflicts since the Cold War.
“My research examines how new technologies impact the use of force and decision-making during interstate conflicts,” Lin-Greenberg says. When conducting academic inquiries, he finds himself asking: “Would my boss’s boss care about the questions I’m asking?”
Lin-Greenberg also co-leads the MIT Security Studies Program’s Wargaming Lab, a research group that investigates conflict through war-gaming and helps develop best practices for academic war-gaming. “War games are data-gathering tools,” he says, “and the lab allows me to integrate academic tools, like experiments, into war games, which have traditionally been used by militaries.”
Leading in the classroom and on the base
Lin-Greenberg understands and appreciates the responsibilities he’s earned and takes a deliberate and careful approach to how he leads his reserve unit and how he advises his students. The personnel he leads in the Air Force in many ways resemble his MIT students, Lin-Greenberg believes. “They are innovative and dedicated to their work,” he says. His role as a leader in the armed forces helps him develop strategies to effectively advise his students while creating mentorship opportunities in all of his professional roles.
When advising students, Lin-Greenberg explains that he leverages lessons about giving tough feedback and motivating people, lessons he learned from Air Force mentors. In his Air Force role, he tries to incorporate insights from international relations and security studies scholarship to explain the strategic environment to junior personnel.
Lin-Greenberg believes he landed in his positions in the Air Force and at MIT because he took advantage of opportunities when they arrived, and he advises others to do the same. “Everything happens for a reason,” he says.
Erik Lin-Greenberg's role as a leader in the U.S. armed forces helps him develop strategies to effectively advise his students at MIT, while creating mentorship opportunities across both his leadership roles.
When navigating a place that we’re only somewhat familiar with, we often rely on unique landmarks to help make our way. However, if we’re looking for an office in a brick building, and there are many brick buildings along our route, we might use a rule like looking for the second building on a street, rather than relying on distinguishing the building itself.
Until that ambiguity is resolved, we must hold in mind that there are multiple possibilities (or hypotheses) for where we are in relation to our destination. In a study of mice, MIT neuroscientists have now discovered that these hypotheses are explicitly represented in the brain by distinct neural activity patterns.
This is the first time that neural activity patterns that encode simultaneous hypotheses have been seen in the brain. The researchers found that these representations, which were observed in the brain’s retrosplenial cortex (RSC), not only encode hypotheses but also could be used by the animals to choose the correct way to go.
“As far as we know, no one has shown in a complex reasoning task that there’s an area in association cortex that holds two hypotheses in mind and then uses one of those hypotheses, once it gets more information, to actually complete the task,” says Mark Harnett, an associate professor of brain and cognitive sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.
Jakob Voigts PhD ’17, a former postdoc in Harnett’s lab and now a group leader at the Howard Hughes Medical Institute Janelia Research Campus, is the lead author of the paper, which appears today in Nature Neuroscience.
Ambiguous landmarks
The RSC receives input from the visual cortex, the hippocampal formation, and the anterior thalamus, which it integrates to help guide navigation.
In a 2020 paper, Harnett’s lab found that the RSC uses both visual and spatial information to encode landmarks used for navigation. In that study, the researchers showed that neurons in the RSC of mice integrate visual information about the surrounding environment with spatial feedback of the mice’s own position along a track, allowing them to learn where to find a reward based on landmarks that they saw.
In their new study, the researchers wanted to delve further into how the RSC uses spatial information and situational context to guide navigational decision-making. To do that, the researchers devised a much more complicated navigational task than those typically used in mouse studies. They set up a large, round arena with 16 small openings, or ports, along the side walls. One of these openings would give the mice a reward when they stuck their nose through it. In the first set of experiments, the researchers trained the mice to go to different reward ports indicated by dots of light on the floor that were only visible when the mice got close to them.
Once the mice learned to perform this relatively simple task, the researchers added a second dot. The two dots were always the same distance from each other and from the center of the arena. But now the mice had to go to the port indicated by the counterclockwise dot to get the reward. Because the dots were identical and only became visible at close distances, the mice could never see both dots at once and could not immediately determine which dot was which.
To solve this task, mice therefore had to remember where they expected a dot to show up, integrating their own body position, the direction they were heading, and the path they took to figure out which landmark was which. By measuring RSC activity as the mice approached the ambiguous landmarks, the researchers could determine whether the RSC encodes hypotheses about spatial location. The task was carefully designed to require the mice to use the visual landmarks to obtain rewards, instead of other strategies like odor cues or dead reckoning.
“What is important about the behavior in this case is that mice need to remember something and then use that to interpret future input,” says Voigts, who worked on this study while a postdoc in Harnett’s lab. “It’s not just remembering something, but remembering it in such a way that you can act on it.”
The researchers found that as the mice accumulated information about which dot might be which, populations of RSC neurons displayed distinct activity patterns for incomplete information. Each of these patterns appears to correspond to a hypothesis about where the mouse thought it was with respect to the reward.
When the mice got close enough to figure out which dot was indicating the reward port, these patterns collapsed into the one that represented the correct hypothesis. The findings suggest that these patterns not only passively store hypotheses but can also be used to compute how to get to the correct location, the researchers say.
“We show that RSC has the required information for using this short-term memory to distinguish the ambiguous landmarks. And we show that this type of hypothesis is encoded and processed in a way that allows the RSC to use it to solve the computation,” Voigts says.
Interconnected neurons
When analyzing their initial results, Harnett and Voigts consulted with MIT Professor Ila Fiete, who had run a study about 10 years ago using an artificial neural network to perform a similar navigation task.
That study, previously published on bioRxiv, showed that the neural network displayed activity patterns that were conceptually similar to those seen in the animal studies run by Harnett’s lab. The neurons of the artificial neural network ended up forming highly interconnected low-dimensional networks, like the neurons of the RSC.
“That interconnectivity seems, in ways that we still don’t understand, to be key to how these dynamics emerge and how they’re controlled. And it’s a key feature of how the RSC holds these two hypotheses in mind at the same time,” Harnett says.
In his lab at Janelia, Voigts now plans to investigate how other brain areas involved in navigation, such as the prefrontal cortex, are engaged as mice explore and forage in a more naturalistic way, without being trained on a specific task.
“We’re looking into whether there are general principles by which tasks are learned,” Voigts says. “We have a lot of knowledge in neuroscience about how brains operate once the animal has learned a task, but in comparison we know extremely little about how mice learn tasks or what they choose to learn when given freedom to behave naturally.”
The research was funded, in part, by the National Institutes of Health, a Simons Center for the Social Brain at MIT postdoctoral fellowship, the National Institute of General Medical Sciences, and the Center for Brains, Minds, and Machines at MIT, funded by the National Science Foundation.
New research finds a brain region critical for navigation uses distinct neural activity patterns to encode multiple hypotheses that help distinguish between ambiguous landmarks.
Animators could create more realistic bouncy, stretchy, and squishy characters for movies and video games thanks to a new simulation method developed by researchers at MIT.
Their approach allows animators to simulate rubbery and elastic materials in a way that preserves the physical properties of the material and avoids pitfalls like instability.
The technique simulates elastic objects for animation and other applications, with improved reliability compared to other methods. In comparison, many existing simulation techniques can produce elastic animations that become erratic or sluggish or can even break down entirely.
To achieve this improvement, the MIT researchers uncovered a hidden mathematical structure in equations that capture how elastic materials deform on a computer. By leveraging this property, known as convexity, they designed a method that consistently produces accurate, physically faithful simulations.
“The way animations look often depends on how accurately we simulate the physics of the problem,” says Leticia Mattos Da Silva, an MIT graduate student and lead author of a paper on this research. “Our method aims to stay true to physical laws while giving more control and stability to animation artists.”
Beyond 3D animation, the researchers also see potential future uses in the design of real elastic objects, such as flexible shoes, garments, or toys. The method could be extended to help engineers explore how stretchy objects will perform before they are built.
She is joined on the paper by Silvia Sellán, an assistant professor of computer science at Columbia University; Natalia Pacheco-Tallaj, an MIT graduate student; and senior author Justin Solomon, an associate professor in the MIT Department of Electrical Engineering and Computer Science and leader of the Geometric Data Processing Group in the Computer Science and Artificial Intelligence Laboratory (CSAIL). The research will be presented at the SIGGRAPH conference.
Truthful to physics
If you drop a rubber ball on a wooden floor, it bounces back up. Viewers expect to see the same behavior in an animated world, but recreating such dynamics convincingly can be difficult. Many existing techniques simulate elastic objects using fast solvers that trade physical realism for speed, which can result in excessive energy loss or even simulation failure.
More accurate approaches, including a class of techniques called variational integrators, preserve the physical properties of the object, such as its total energy or momentum, and, in this way, mimic real-world behavior more closely. But these methods are often unreliable because they depend on complex equations that are hard to solve efficiently.
The MIT researchers tackled this problem by rewriting the equations of variational integrators to reveal a hidden convex structure. They broke the deformation of elastic materials into a stretch component and a rotation component, and found that the stretch portion forms a convex problem that is well-suited for stable optimization algorithms.
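A stretch–rotation split of this kind is typically expressed through a polar decomposition of the deformation gradient; as an illustrative sketch (the paper's exact formulation may differ), the idea can be written as:

```latex
F = R\,S, \qquad R^{\top} R = I, \qquad S = S^{\top} \succeq 0,
```

where \(F\) is the deformation gradient, \(R\) is a rotation, and \(S\) is the symmetric stretch. An elastic energy that is non-convex as a function of \(F\) can turn out to be convex when viewed as a function of the stretch variables alone, which is what makes stable convex optimization applicable to that part of the problem.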
“If you just look at the original formulation, it seems fully non-convex. But because we can rewrite it so that it is convex in at least some of its variables, we can inherit some advantages of convex optimization algorithms,” she says.
These convex optimization algorithms, when applied under the right conditions, come with guarantees of convergence, meaning they are more likely to find the correct answer to the problem. This generates more stable simulations over time, avoiding issues like a bouncing rubber ball losing too much energy or exploding mid-animation.
One of the biggest challenges the researchers faced was reinterpreting the formulation so they could extract that hidden convexity. Some other works explored hidden convexity in static problems, but it was not clear whether the structures remained solid for dynamic problems like simulating elastic objects in motion, Mattos Da Silva says.
Stability and efficiency
In experiments, their solver was able to simulate a wide range of elastic behavior, from bouncing shapes to squishy characters, with preservation of important physical properties and stability over long periods of time. Other simulation methods quickly ran into trouble: Some became unstable, causing erratic behavior, while others showed visible damping.
“Because our method demonstrates more stability, it can give animators more reliability and confidence when simulating anything elastic, whether it’s something from the real world or even something completely imaginary,” she says.
While the solver is not as fast as some simulation tools that prioritize speed over accuracy, it avoids many of the trade-offs those methods make. Compared to other physics-based approaches, it also avoids the need for complex, nonlinear solvers that can be sensitive and prone to failure.
In the future, the researchers want to explore techniques to further reduce computational cost. In addition, they want to explore applications of this technique in fabrication and engineering, where reliable simulations of elastic materials could support the design of real-world objects, like garments and toys.
“We were able to revive an old class of integrators in our work. My guess is there are other examples where researchers can revisit a problem to find a hidden convexity structure that could offer a lot of advantages,” she says.
This research is funded, in part, by a MathWorks Engineering Fellowship, the Army Research Office, the National Science Foundation, the CSAIL Future of Data Program, the MIT-IBM Watson AI Laboratory, the Wistron Corporation, and the Toyota-CSAIL Joint Research Center.
MIT researchers developed a computationally efficient method that could enable artists to design realistic simulations of elastic objects, like bouncy or squishy characters, for animated movies or video games.
Academic research groups and startups are essential drivers of scientific progress. But some projects, like the Hubble Space Telescope or the Human Genome Project, are too big for any one academic lab or loose consortium. They’re also not immediately profitable enough for industry to take on.
That’s the gap researchers at MIT were trying to fill when they created the concept of focused research organizations, or FROs. They describe a FRO as a new type of entity, often philanthropically funded, that undertakes large research efforts using tightly coordinated teams to create a public good that accelerates scientific progress.
The original idea for focused research organizations came out of talks among researchers, most of whom were working to map the brain in MIT Professor Ed Boyden’s lab. After they began publishing their ideas, however, the researchers realized FROs could be a powerful tool to unlock scientific advances across many other applications.
“We were quite pleasantly surprised by the range of fields where we see FRO-shaped problems,” says Adam Marblestone, a former MIT research scientist who co-founded the nonprofit Convergent Research to help launch FROs in 2021. “Convergent has FRO proposals from climate, materials science, chemistry, biology — we even have launched a FRO on software for math. You wouldn’t expect math to be something with a large-scale technological research bottleneck, but it turns out even there, we found a software engineering bottleneck that needed to be solved.”
Marblestone helped formulate the idea for focused research organizations at MIT with a group including Andrew Payne SM ’17, PhD ’21 and Sam Rodriques PhD ’19, who were PhD students in Boyden’s lab at the time. Since then, the FRO concept has caught on. Convergent has helped attract philanthropic funding for FROs working to decode the immune system, identify the unintended targets of approved drugs, and understand the impacts of carbon dioxide removal in our oceans.
In total, Convergent has supported the creation of 10 FROs since its founding in 2021. Many of those groups have already released important tools for better understanding our world — and their leaders believe the best is yet to come.
“We’re starting to see these first open-source tools released in important areas,” Marblestone says. “We’re seeing the first concrete evidence that FROs are effective, because no other entity could have released these tools, and I think 2025 is going to be a significant year in terms of our newer FROs putting out new datasets and tools.”
A new model
Marblestone joined Boyden’s lab in 2014 as a research scientist after completing his PhD at Harvard University. He also worked in a new position called director of scientific architecting at the MIT Media Lab, which Boyden helped create, through which he tried to organize individual research efforts into larger projects. His own research focused on overcoming the challenges of measuring brain activity across large scales.
Marblestone discussed this and other large-scale neuroscience problems with Payne and Rodriques, and the researchers began thinking about gaps in scientific funding more broadly.
“The combination of myself, Sam, Andrew, Ed, and others’ experiences trying to start various large brain-mapping projects convinced us of the gap in support for medium-sized science and engineering teams with startup-inspired structures, built for the nonprofit purpose of building scientific infrastructure,” Marblestone says.
Through MIT, the researchers also connected with Tom Kalil, who was at the time chief innovation officer at Schmidt Futures, a philanthropic initiative of Eric and Wendy Schmidt. Rodriques wrote about the concept of a focused research organization as the last chapter of his PhD thesis in 2019.
“Ed always encouraged us to dream very, very big,” Rodriques says. “We were always trying to think about the hardest problems in biology and how to tackle them. My thesis basically ended with me explaining why we needed a new structure that is like a company, but nonprofit and dedicated to science.”
As part of a fellowship with the Federation of American Scientists in 2020, and working with Kalil, Marblestone interviewed scientists in dozens of fields outside of neuroscience and learned that the funding gap existed across disciplines.
When Rodriques and Marblestone published an essay about their findings, it helped attract philanthropic funding, which Marblestone, Kalil, and co-founder Anastasia Gamick used to launch Convergent Research, a nonprofit science studio for launching FROs.
“I see Ed’s lab as a melting pot where myself, Ed, Sam, and others worked on articulating a need and identifying specific projects that might make sense as FROs,” Marblestone says. “All those ideas later got crystallized when we created Convergent Research.”
In 2021, Convergent helped launch the first FROs: E11 Bio, which is led by Payne and committed to developing tools to understand how the brain is wired, and Cultivarium, a FRO making microorganisms more accessible for work in synthetic biology.
“From our brain mapping work we started asking the question, ‘Are there other projects that look like this that aren’t getting funded?’” Payne says. “We realized there was a gap in the research ecosystem, where some of these interdisciplinary, team science projects were being systematically overlooked. We knew a lot of amazing things would come out of getting those projects funded.”
Tools to advance science
Early progress from the first focused research organizations has strengthened Marblestone’s conviction that they’re filling a gap.
[C]Worthy is the FRO building tools to ensure safe, ocean-based carbon dioxide removal. It recently released an interactive map of alkaline activity to improve our understanding of one method for sequestering carbon known as ocean alkalinity enhancement. Last year, a math FRO, Lean, released a programming language and proof assistant that was used by Google’s DeepMind AI lab to solve problems in the International Mathematical Olympiad, achieving the same level as a silver medalist in the competition for the first time. The synthetic biology FRO Cultivarium, in turn, has already released software that can predict growth conditions for microbes based on their genome.
Last year, E11 Bio previewed a new method for mapping the brain called PRISM, which it has used to map out a portion of the mouse hippocampus. It will be making the data and mapping tool available to all researchers in coming months.
“A lot of this early work has proven you can put a really talented team together and move fast to go from zero to one,” Payne says. “The next phase is proving FROs can continue to build on that momentum and develop even more datasets and tools, establish even bigger collaborations, and scale their impact.”
Payne credits Boyden for fostering an ecosystem where researchers could think about problems beyond their narrow area of study.
“Ed’s lab was a really intellectually stimulating, collaborative environment,” Payne says. “He trains his students to think about impact first and work backward. It was a bunch of people thinking about how they were going to change the world, and that made it a particularly good place to develop the FRO idea.”
Marblestone says supporting FROs has been the highest-impact thing he’s been able to do in his career. Still, he believes the success of FROs should be judged over closer to 10-year periods and will depend on not just the tools they produce but also whether they spin out companies, partner with other institutes, and create larger, long-lasting initiatives to deploy what they built.
“We were initially worried people wouldn’t be willing to join these organizations because it doesn’t offer tenure and it doesn’t offer equity in a startup,” Marblestone says. “But we’ve been able to recruit excellent leaders, scientists, engineers, and others to create highly motivated teams. That’s good evidence this is working. As we get strong projects and good results, I hope it will create this flywheel where it becomes easier to fund these ideas, more scientists will come up with them, and I think we’re starting to get there.”
The Hubble Space Telescope, the Human Genome Project, and the Large Hadron Collider/CERN were large-scale engineering projects that inspired MIT researchers to create Focused Research Organizations.
As geopolitical upheaval, technological disruption, and demographic shifts signal an impending economic slowdown for most of the world, social mobility is gaining recognition for its bearing on a society’s ability to tackle these challenges.
This characteristic of societies describes how closely the social status of individuals is linked to that of their parents, with social immobility resulting in barriers that prevent talented and hardworking people from realising their full potential. On the other hand, high social mobility ensures that people have access to opportunities and a fair chance to achieve their dreams, leading to greater stability and prosperity for society at large.
About 700 policymakers, researchers, and practitioners from across the people, public, and private sectors gathered on 29-30 April 2025 to exchange ideas and research findings for advancing social mobility at the inaugural International Conference on Societies of Opportunity. The conference was organised by Singapore’s Ministry of Social and Family Development and the Institute of Policy Studies (IPS).
Said Dr Justin Lee, Senior Research Fellow and Head of Policy Lab at IPS: “The topic of social mobility is timely as societies worldwide grapple with economic and technological shifts. The conference examines how government policies, corporate investments, and community action can contribute to a more equitable and inclusive society.”
Over the two days, attendees gleaned insights from two keynote speeches, three plenary sessions, and four breakout sessions on effective ways that governments, community organisations, corporates, academia, and individuals can help to remove obstacles to progress and uplift those in the lower strata of society.
Close links between social mobility and economic growth
In the conference’s opening address, then Deputy Prime Minister Mr Heng Swee Keat contrasted the post-World War period of strong economic growth and social mobility with the current era where opportunities and progress are being stymied by deglobalisation, technological disruption, and ageing populations. Countries that once relied on the free flow of people, ideas, and capital across borders for growth must now put more effort into maximising their existing resources, especially their human capital.
“As a small node in the global economy, Singapore cannot block the forces of global change. Instead, we must create opportunities for our people, by staying relevant and useful to the world,” said Mr Heng, who cited building partnerships with other countries, investing in research and education to keep pace with technological developments, and shoring up support for the lowest rungs of society as some of Singapore’s strategies for doing so.
The need for both macro- and micro-level interventions to achieve social mobility targets was explored in keynote speeches by Professor Danny Quah, Dean and Li Ka Shing Professor in Economics at the Lee Kuan Yew School of Public Policy, and Professor Raj Chetty, William A. Ackman Professor of Public Economics and Director of Opportunity Insights at Harvard University.
Prof Quah illustrated the importance of macro-level interventions using the Growth-Mobility Curve, which emerged from comparisons of cross-country data showing that past economy-wide growth foreshadows high social mobility. “Economic growth provides the space and the opportunity for all members of society to advance, especially those at the bottom end of the income distribution,” he said.
Approaching the question from the other direction, Prof Chetty shared findings on the direct impact of childhood circumstances on economic opportunities and mobility, uncovered through analysing large-scale data sets.
For instance, one study of 20 million people born in the United States found that their upward economic mobility was strongly linked to the levels of inequality, stability of family structures, education, and social capital in the neighbourhoods where they grew up. Another study of children whose families moved from poorer to richer neighbourhoods, even just across the road, showed a clear pattern of higher earnings in adulthood, with greater gains realised by those who moved earlier in life.
“If you take a given child and change that child’s environment, that dramatically changes their life outcomes,” said Prof Chetty, noting that the findings have been replicated in other countries. “This is very encouraging for those interested in improving economic opportunity and social mobility, because it shows that this is a tractable problem.”
Perspectives from research and practice
Making welfare systems more universal and prioritising cross-class mixing to increase social capital were some of the recommendations put forward in the conference to make social mobility efforts more effective.
Professor Irene Ng of the Department of Social Work at the Faculty of Arts and Social Sciences (FASS) highlighted the challenges that needy individuals face in navigating Singapore’s welfare system, where an emphasis on self-reliance and keeping welfare spending low has resulted in a wide variety of programmes, each with its own application processes and requirements.
“A low-income person’s mind is constantly preoccupied with how to pay the bills, how to put food on the table, and that leaves little cognitive space for other important tasks in life,” she said. “This has implications on how multiple programmes can cognitively overload low-income individuals, so that it has the counteracting effect of impairing their function and ability to succeed at work and comply with programmes.”
Citing research that shows more universal or generous welfare systems to be more successful at promoting social mobility, Prof Ng recommended consolidating programmes, simplifying processes, and incorporating automatic stabilisers like inflation indexing to ensure that assistance keeps pace with economic conditions.
To help lower-income groups increase their social capital, several speakers suggested ways to promote mixing between people of different socioeconomic classes, which can expose children to more potential career options and pave the way to opportunities through job referrals or advice.
“For opportunities to translate into uptake, people need the guiding hand of social capital, someone who comes alongside to say, ‘This is the way to go about it, this is what to say, this is how to say it,’” said Associate Professor Vincent Chua of the Department of Sociology and Anthropology at FASS, whose research found that having a mentor like a trusted teacher increases the probability of a young person becoming a university graduate. He proposed matching effective teachers to less popular schools and prioritising mentor-mentee matching for students of lower socioeconomic status.
Along similar lines, social capital can be built through engaging parents and community members as volunteers in school activities like reading programmes, said Emeritus Professor Esther Ho, Senior Research Fellow at the Department of Educational Administration and Policy and Director of the Hong Kong Centre for International Student Assessment, both at the Chinese University of Hong Kong. She noted that such programmes have a significant impact on reading habits and enjoyment especially among lower-income students whose parents may have less time to read with them at home.
Representatives from the social service, corporate, and philanthropic sectors underscored the need to ensure that invested capital drives outcomes and catalyses innovation, rather than simply funding well-intentioned programmes that may not yield meaningful results. This requires focusing on outcome metrics like learning improvements instead of activity metrics like teacher training hours, as well as forming strategic partnerships to tap the right expertise and leverage innovative mechanisms like blended finance.
Said Ms Tan Li San, CEO of the National Council of Social Service, “As society becomes more complex and the global environment becomes more uncertain, it’s even more critical to tap on different stakeholders who bring valuable expertise and assets that can help us create much greater social impact in our society.”
By Prof Lawrence Loh, Director at the Centre for Governance and Sustainability, and Belle Tran, an MBA student, both from NUS Business School. The Business Times Weekend, 24 May 2025, p30
At the level of molecules and cells, ketamine and dexmedetomidine work very differently, but in the operating room they do the same exact thing: anesthetize the patient. By demonstrating how these distinct drugs achieve the same result, a new study in animals by neuroscientists at The Picower Institute for Learning and Memory at MIT identifies a potential signature of unconsciousness that is readily measurable to improve anesthesiology care.
What the two drugs have in common, the researchers discovered, is the way they push around brain waves, which are produced by the collective electrical activity of neurons. When brain waves are in phase, meaning the peaks and valleys of the waves are aligned, local groups of neurons in the brain’s cortex can share information to produce conscious cognitive functions such as attention, perception, and reasoning, says Picower Professor Earl K. Miller, senior author of the new study in Cell Reports. When brain waves fall out of phase, local communications, and therefore functions, fall apart, producing unconsciousness.
The finding, led by graduate student Alexandra Bardon, not only adds to scientists’ understanding of the dividing line between consciousness and unconsciousness, Miller says, but also could provide a common new measure for anesthesiologists who use a variety of different anesthetics to maintain patients on the proper side of that line during surgery.
“If you look at the way phase is shifted in our recordings, you can barely tell which drug it was,” says Miller, a faculty member in the Picower Institute and MIT’s Department of Brain and Cognitive Sciences. “That’s valuable for medical practice. Plus if unconsciousness has a universal signature, it could also reveal the mechanisms that generate consciousness.”
If more anesthetic drugs are also shown to affect phase in the same way, then anesthesiologists might be able to use brain wave phase alignment as a reliable marker of unconsciousness as they titrate doses of anesthetic drugs, Miller says, regardless of which particular mix of drugs they are using. That insight could aid efforts to build closed-loop systems that can aid anesthesiologists by constantly adjusting drug dose based on brain wave measurements of the patient’s unconsciousness.
Miller has been collaborating with study co-author Emery N. Brown, an anesthesiologist and Edward Hood Taplin Professor of Computational Neuroscience and Medical Engineering in the Picower Institute, on building such a system. In a recent clinical trial with colleagues in Japan, Brown demonstrated that monitoring brain wave power signals using EEG enabled an anesthesiologist to use much less sevoflurane during surgery with young children. The reduced doses proved safe and were associated with many improved clinical outcomes, including a reduced incidence of post-operative delirium.
Phase findings
Neuroscientists studying anesthesia have rarely paid attention to phase, but in the new study, Bardon, Brown, and Miller’s team made a point of it as they anesthetized two animals.
After the animals lost consciousness, the measurements indicated a substantial increase in “phase locking,” especially at low frequencies. Phase locking means that the relative differences in phase remained more stable. But what caught the researchers’ attention were the differences that became locked in: within each hemisphere, regardless of which anesthetic they used, brain wave phase became misaligned between the dorsolateral and ventrolateral regions of the prefrontal cortex.
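Phase locking in this sense is commonly quantified with a phase-locking value (PLV): the magnitude of the average unit vector of the phase differences between two signals. The study's exact analysis pipeline isn't described in this article, so the sketch below is only an illustration of the general metric, with made-up signals, not the paper's method. Note that two sites can be strongly phase-locked while still misaligned, which is exactly the pattern described here.

```python
import numpy as np

def phase_locking_value(phase_a, phase_b):
    # Magnitude of the mean unit vector of the phase differences:
    # 1.0 means a perfectly stable phase relationship (even if the
    # phases are offset), 0.0 means the difference is uniformly random.
    return np.abs(np.mean(np.exp(1j * (phase_a - phase_b))))

rng = np.random.default_rng(0)
base = 2 * np.pi * 5 * np.linspace(0, 1, 1000)  # phase of a 5 Hz oscillation

# "Awake"-like case: the phase difference between two sites drifts randomly.
awake = phase_locking_value(base + np.cumsum(rng.normal(0, 0.3, 1000)), base)

# "Anesthetized"-like case: the two sites are held at a fixed 180-degree
# offset -- misaligned, yet strongly phase-locked.
locked = phase_locking_value(base + np.pi, base)

print(f"drifting: {awake:.2f}, locked at 180 deg: {locked:.2f}")
```

The second case returns a PLV of essentially 1.0 despite the complete misalignment, which is why "more phase locking" and "more misalignment" can coexist in these results.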
Surprisingly, brain wave phase across hemispheres became more aligned, not less. But Miller notes that this is still a big shift from the conscious state, in which the brain hemispheres are typically not well aligned, so the finding is a further indication that major changes in phase alignment, albeit in different ways at different distances, are a correlate of unconsciousness compared to wakefulness.
“The increase in interhemispheric alignment of activity by anesthetics seems to reverse the pattern observed in the awake, cognitively engaged brain,” the Bardon and Miller team wrote in Cell Reports.
Determined by distance
Distance proved to be a major factor in determining the change in phase alignment. Even across the 2.5 millimeters of a single electrode array, low-frequency waves moved 20-30 degrees out of alignment. Across the 20 or so millimeters between arrays in the upper (dorsolateral) and lower (ventrolateral) regions within a hemisphere, that would mean a roughly 180-degree shift in phase alignment, which is a complete offset of the waves.
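The extrapolation in the paragraph above can be checked with back-of-envelope arithmetic; the figures below are the article's approximate numbers (taking 25 degrees as the midpoint of the 20-30 degree range), not values taken from the paper itself.

```python
# ~20-30 degrees of misalignment across a 2.5 mm electrode array;
# take 25 degrees as a midpoint.
shift_per_mm = 25 / 2.5             # ~10 degrees of phase shift per millimeter
across_regions = shift_per_mm * 20  # ~20 mm between the two prefrontal arrays

print(across_regions)  # ~200 degrees, i.e. roughly the reported
                       # 180-degree (fully offset) shift
```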
The dependence on distance is consistent with the idea of waves traveling across the cortex, Miller says. Indeed, in a 2022 study, Miller and Brown’s labs showed that the anesthetic propofol induced a powerful low-frequency traveling wave that swept straight across the cortex, overwhelming higher-frequency straight and rotating waves.
The new results raise many opportunities for follow-up studies, Miller says. Does propofol also produce this signature of changed phase alignment? What role do traveling waves play in the phenomenon? And given that sleep is also characterized by increased power in slow wave frequencies, but is definitely not the same state as anesthesia-induced unconsciousness, could phase alignment explain the difference?
In addition to Bardon, Brown, and Miller, the paper’s other authors are Jesus Ballesteros, Scott Brincat, Jefferson Roy, Meredith Mahnke, and Yumiko Ishizawa.
The U.S. Department of Energy, the National Institutes of Health, the Simons Center for the Social Brain, the Freedom Together Foundation, and the Picower Institute provided support for the research.
Researchers studying how different anesthetic drugs achieve the same result saw that brain waves within the same region on the same side of the brain shifted out of phase, like the waves in this image.
Earlier this year, the first of two space domain awareness (SDA) payloads, called the QZS6-HP1, launched from Tanegashima, Japan. Recently, that payload collected its first imaging data, a moment known as first light. Sponsored by the United States Space Force (USSF), MIT Lincoln Laboratory designed, built, and delivered the two payloads as part of a U.S. and Japanese partnership program called the Quasi-Zenith Satellite System Hosted Payload (QZSS-HP). The program demonstrates a shared commitment to increasing space partnerships in alignment with both allies' national space policies and contributes to integrated deterrence and international security. Throughout the program, Lincoln Laboratory worked side-by-side with the USSF, Japan's National Space Policy Secretariat, and the Mitsubishi Electric Corp.
For the past few decades, satellite launches across the globe have steadily increased as governments and private commercial companies initiate and progress their space-related activities, creating a more crowded space environment. Both the United States and Japan are interested in fortifying SDA within the crowded geosynchronous orbit (GEO) space. This international program began in 2019 as a way to meet this need by pairing a U.S. SDA sensor with the ongoing Japanese QZSS program. The QZSS is Japan's domestically engineered and manufactured position, navigation, and timing space system, designed for users in Japan and currently augmenting the U.S. Global Positioning System.
The USSF engaged Lincoln Laboratory for this program because of its extensive experience in developing SDA sensors, particularly for the ORS-5/SensorSat satellite, which launched in 2017. SensorSat is a small, low-cost alternative to current U.S. capabilities in detecting and tracking GEO satellites. The QZSS payloads leverage SensorSat's compact optical design that allows their sensors to passively survey the sky with high performance. Unlike SensorSat, however, which sends its collected data to a ground system for processing, the laboratory's QZSS payloads accomplish the majority of their data processing on-orbit. This alternative processing approach reduces the size of the downlinked data by three orders of magnitude, making it an enabling architecture for bandwidth-constrained missions.
"The payload's passive searching offloads other SDA assets by providing continuous monitoring, which creates a more resilient space architecture," says Ashley Long, Lincoln Laboratory's program manager for QZSS-HP. These satellites will deliver near-real-time data to the U.S. Space Surveillance Network.
The second QZSS payload has been integrated onto Japan's QZS-7 satellite and is expected to launch in late 2025. For QZS6-HP1, the Lincoln Laboratory team is now conducting on-orbit testing.
Emily Clements, a deputy manager for the program, says that reaching the first-light stage is a significant milestone. "For first light to succeed, every part of the system has to work, including the laboratory-fabricated sensor and the payload's many supporting subsystems, as well as data interfaces with Japan and the U.S. ground systems receiving the data," she says. "This moment represents the culmination of years of hard work and international partnership, paving the way for more comprehensive SDA monitoring of GEO."
Over the next few months, the Lincoln Laboratory team will refine sensor parameters based on on-orbit data to maximize performance. The team will then continue to support operations for the lifetime of the mission.
"While originally conceived to be a demonstration mission and a pathfinder for international collaboration, the QZSS-HP promises to provide strong operational utility for the United States," Long says. "Additionally, the payload design has been transferred to the government, allowing for similar payloads to be built and delivered, further extending the reach and impact of this mission."
Campus & Community
Overseers announce new president, vice chair
Monica Bharel (left) and Sylvia Mathews Burwell. Niles Singer/Harvard Staff Photographer
June 5, 2025
Sylvia Mathews Burwell and Monica Bharel to assume leadership roles for 2025-2026
Sylvia Mathews Burwell ’87, former president of American University and former secretary of the U.S. Department of Health and Human Services, has been elected president of the Harvard University Board of Overseers for the 2025-2026 academic year.
Monica Bharel, M.P.H. ’12, a physician, public health leader at Google Health, and former commissioner of the Massachusetts Department of Public Health, will serve as vice chair of the board’s executive committee for the same term.
Burwell and Bharel assume the board’s top leadership roles succeeding Vivian Hunt ’89, M.B.A. ’95, chief innovation officer of UnitedHealth Group, and Tyler Jacks ’83, a cancer genetics research expert and professor at the Massachusetts Institute of Technology, who served in the roles over the past academic year.
“Sylvia Burwell and Monica Bharel are accomplished alumni leaders whose experience overcoming complex challenges under extraordinary circumstances will serve Harvard well,” said President Alan Garber. “At the helm of a university, at the highest levels of government, and in two of the world’s largest philanthropic foundations, Sylvia has demonstrated a keen understanding of large multifaceted organizations and what it takes to advance them. As a physician, a public health expert, and a government leader, Monica has combined compassion with evidence-based solutions to keep people healthy throughout the region and across the country. I am grateful to them both for their leadership and for their commitment to the University.”
The Board of Overseers is one of Harvard’s two governing boards and its members are made up of and elected by Harvard alumni. Formally established in 1642, the board plays an integral role in the governance of the University. As a central part of its work, the board directs the visitation process, the primary means for periodic external assessment of Harvard’s Schools and departments. Through its array of standing committees, and the roughly 50 visiting committees that report to them, the board probes the quality of Harvard’s programs and assures that the University remains true to its charter as a place of learning.
More generally, drawing on its members’ diverse experience and expertise, the board provides counsel to the University’s leadership on priorities, plans, and strategic initiatives. The board also has the power of consent to certain actions, such as the election of members of the Corporation, Harvard’s other governing board.
Sylvia Mathews Burwell
Sylvia Mathews Burwell is a widely experienced leader who has served at the highest levels of government, philanthropy, and academia. Burwell was the 15th president of American University, after having served as Secretary of Health and Human Services from 2014 to 2017 and director of the Office of Management and Budget from 2013 to 2014. As HHS Secretary, she managed a trillion-dollar department with 12 operating divisions — including the National Institutes of Health, U.S. Food and Drug Administration, and Medicaid and Medicare programs.
“It’s an honor to serve as president of the Board of Overseers in the year ahead,” said Burwell.
“I came to Harvard as a freshman, having grown up in Hinton, West Virginia, where everyone knows one another, and where the idea of community is fundamental,” she continued. “My grandparents immigrated to this country with hopes and dreams for their children and their families. My background, together with my time at Harvard, has shaped my understanding of the importance of contributing to this nation and the role community plays in the health of our institutions and country.
“This is a time of serious consequence for higher education, our nation’s students, and for Harvard. I look forward to working closely with President Garber, with my colleagues on the Board of Overseers, with members of the Harvard campus and alumni community to listen and to advance the University’s core teaching, learning, and research mission so that other students can benefit and the University can continue its work improving the lives, livelihoods, and communities of people across the country and around the world.”
As AU president, Burwell steered the university through the COVID-19 pandemic and led the development and implementation of the Changemakers for a Changing World strategic plan, as well as the $500 million Change Can’t Wait campaign, the most successful such campaign in the university’s history. The campaign resulted in the creation of four new and expanded research centers, eight endowed faculty positions, and more than 170 scholarships. Under her leadership, AU opened the Sine Institute for Policy and Politics, the Khan Cyber and Economic Security Institute, and the LEED-Gold Hall of Science.
Burwell also held executive positions at two of the largest foundations in the world — she served as chief operating officer and president of the Global Development Program at the Bill & Melinda Gates Foundation in Seattle, and as president of the Walmart Foundation based in Bentonville, Arkansas.
She has served on numerous higher education boards and is on the boards of Kimberly-Clark, GuideWell Florida Blue, and the Council on Foreign Relations. She is a past board member of the University of Washington Medical Center.
Born and raised in Hinton, a small West Virginia town with a population of around 3,000, Burwell’s family was committed to service in their community. Her mother was a teacher who also served as mayor for nearly a decade. Her father was an optometrist by trade but occasionally filled in as minister at the local Episcopal church when needed. Both sets of maternal and paternal grandparents emigrated from Greece.
Burwell concentrated in political science and government at Harvard. She received an A.B. in philosophy, politics, and economics from the University of Oxford, which she attended as a Rhodes Scholar in 1990.
Monica Bharel
Monica Bharel is a physician, public health leader, medical educator, and public servant. As the global clinical lead in public sector and public health at Google, Bharel works to harness technology to solve public health challenges, using the power of data and analytics to drive innovations in advancing health outcomes for all.
“It’s an immense honor to serve in this role alongside my fellow Overseers and with President Garber and leaders across the University,” said Bharel. “My time as a student at Harvard was transformative. In addition to the courses and analytical frameworks I was exposed to, the fellowship and camaraderie of people working together to solve complex problems expanded my own capacity and ability to imagine new ways of approaching solutions that work for everyone.”
Bharel was appointed by Gov. Charlie Baker in 2015 as Commissioner of the Massachusetts Department of Public Health, serving as the Commonwealth’s chief physician from 2015 to 2021. During that time, she oversaw the Massachusetts public health response to the COVID pandemic as well as several other public health crises, including the opioid epidemic.
As commissioner, Bharel oversaw a public health workforce of nearly 3,000 and an expansive department covering a wide portfolio of health-related issues, including lead poisoning, health equity, and injury prevention. Bharel was a leader in the creation of the Public Health Data Warehouse in 2017, as part of the newly created Office of Population Health. Under her leadership, Massachusetts ranked nationally among the healthiest states in the nation.
Bharel also served as a senior adviser to the mayor of Boston in 2021-22 and was appointed by Mayor Michelle Wu to lead the city’s response to the humanitarian crisis in the area known as Mass and Cass.
Bharel is a board-certified internist who has practiced general internal medicine for more than 20 years, including at Massachusetts General Hospital, Boston Medical Center, neighborhood health centers, the Veterans Administration, and nonprofit organizations. She has served on the faculty of Harvard Medical School, Boston University Chobanian & Avedisian School of Medicine and Harvard T.H. Chan School of Public Health. Prior to becoming commissioner, she was chief medical officer of Boston Health Care for the Homeless Program.
She holds a Master of Public Health degree through the Commonwealth Fund/Harvard University Fellowship in Minority Health Policy. She holds a medical degree from Boston University Chobanian & Avedisian School of Medicine, and completed a residency and chief residency in internal medicine at Boston City Hospital/Boston Medical Center.
First held in 1864, Varsity Athletics remains an enduring symbol of sporting excellence and tradition. This year’s event, hosted at Wilberforce Road Sports Ground in Cambridge, was made even more special by a prestigious recognition from World Athletics: the awarding of two Heritage Plaques to Cambridge University Athletic Club (CUAC) and the Varsity Match itself.
World Athletics Heritage Plaque
Founded in 1857, CUAC is one of the oldest athletics clubs in the world. It played a pivotal role in the development of modern athletics, contributing to the rules and formats that govern the sport today. "Cambridge University Athletic Club is among a small group of pioneering organisations that helped shape modern athletics," World Athletics noted in its announcement.
In honour of this distinguished history, World Athletics CEO and Cambridge alumnus Jon Ridgeon (Magdalene College) returned to his alma mater to present the plaques during the Varsity weekend.
Living up to the historic occasion, fierce but friendly rivalry was on display, with Cambridge securing victories in:
Men’s Blues
Para Team
Men's 2nds
Women’s 2nds
In an interview with Varsity newspaper ahead of the Athletics Varsity, CUAC President Jess Poon reflected on the club’s evolution and the importance of the Varsity Matches. She highlighted the club’s embrace of inclusivity, particularly with the expansion of women's and para-athletics matches, and celebrated the sense of tradition and camaraderie that continues to define the event.
This milestone celebration aligns closely with the University’s priority to encourage participation in sport and physical activity at all levels. Sport plays a critical role in supporting mental wellbeing, fostering leadership and communication skills, and enhancing employability among students.
Across the University, activity priorities include:
Club Support Programme: Aimed at helping sports clubs like CUAC deliver high-quality training and competition experiences, ensuring sustainability and growth.
University of Cambridge Athlete Performance Programme (UCAPP): Providing specialist support for high-performing athletes, enabling them to excel both in their sport and academically.
Active Students Initiative: Promoting sport and physical activity for all students, regardless of ability or experience level, through programmes like 'Give it a Go', designed to remove barriers and encourage lifelong engagement with physical activity.
Bhaskar Vira, Pro-Vice-Chancellor for Education and Chair of the Sports Committee, has expressed the University’s enthusiasm for supporting sport: "Involvement in physical activity and sports provides a much-needed release from the intense pressures that are associated with life at Cambridge. I firmly believe that these are inherently complementary pursuits, allowing participants to achieve a balance between their work commitments and their own personal wellbeing."
The 150th Men's, 50th Women's, and 2nd Para Athletics Varsity Matches not only celebrated a rich and trailblazing past but also pointed towards a vibrant future, powered by a University-wide commitment to excellence, inclusion, and wellbeing in sport.
As Cambridge looks to build on this legacy, the University invites alumni and supporters to help sustain and grow these opportunities - ensuring that generations of Cambridge students continue to benefit from the profound personal, academic, and societal advantages that sport and physical activity bring.
The world’s oldest athletics competition — the annual contest between Cambridge and Oxford — reached a landmark celebration this year, commemorating 150 years of men's competition, 50 years of women's competition, and the second year of the para-athletics Varsity.
Ready for that long-awaited summer vacation? First, you’ll need to pack all items required for your trip into a suitcase, making sure everything fits securely without crushing anything fragile.
Because humans possess strong visual and geometric reasoning skills, this is usually a straightforward problem, even if it may take a bit of finagling to squeeze everything in.
To a robot, though, it is an extremely complex planning challenge that requires thinking simultaneously about many actions, constraints, and mechanical capabilities. Finding an effective solution could take the robot a very long time — if it can even come up with one.
Researchers from MIT and NVIDIA Research have developed a novel algorithm that dramatically speeds up the robot’s planning process. Their approach enables a robot to “think ahead” by evaluating thousands of possible solutions in parallel and then refining the best ones to meet the constraints of the robot and its environment.
Instead of testing each potential action one at a time, like many existing approaches, this new method considers thousands of actions simultaneously, solving multistep manipulation problems in a matter of seconds.
The researchers harness the massive computational power of specialized processors called graphics processing units (GPUs) to enable this speedup.
In a factory or warehouse, their technique could enable robots to rapidly determine how to manipulate and tightly pack items that have different shapes and sizes without damaging them, knocking anything over, or colliding with obstacles, even in a narrow space.
“This would be very helpful in industrial settings where time really does matter and you need to find an effective solution as fast as possible. If your algorithm takes minutes to find a plan, as opposed to seconds, that costs the business money,” says MIT graduate student William Shen SM ’23, lead author of the paper on this technique.
He is joined on the paper by Caelan Garrett ’15, MEng ’15, PhD ’21, a senior research scientist at NVIDIA Research; Nishanth Kumar, an MIT graduate student; Ankit Goyal, an NVIDIA research scientist; Tucker Hermans, an NVIDIA research scientist and associate professor at the University of Utah; Leslie Pack Kaelbling, the Panasonic Professor of Computer Science and Engineering at MIT and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL); Tomás Lozano-Pérez, an MIT professor of computer science and engineering and a member of CSAIL; and Fabio Ramos, principal research scientist at NVIDIA and a professor at the University of Sydney. The research will be presented at the Robotics: Science and Systems Conference.
Planning in parallel
The researchers’ algorithm is designed for what is called task and motion planning (TAMP). The goal of a TAMP algorithm is to come up with a task plan for a robot, which is a high-level sequence of actions, along with a motion plan, which includes low-level action parameters, like joint positions and gripper orientation, that complete that high-level plan.
To create a plan for packing items in a box, a robot needs to reason about many variables, such as the final orientation of packed objects so they fit together, as well as how it is going to pick them up and manipulate them using its arm and gripper.
It must do this while determining how to avoid collisions and achieve any user-specified constraints, such as a certain order in which to pack items.
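To make the two levels concrete, a TAMP solution pairs each high-level action with the continuous parameters that realize it. The sketch below is an illustrative data structure only; the names (`Action`, `Plan`, the joint values) are hypothetical and not taken from the researchers' code.

```python
from dataclasses import dataclass, field

@dataclass
class Action:
    """One task-level step (e.g., 'pick' or 'place') together with the
    motion-level parameters that carry it out."""
    name: str                                   # high-level action
    target: str                                 # object it applies to
    joint_positions: list[float] = field(default_factory=list)
    gripper_orientation: tuple[float, float, float] = (0.0, 0.0, 0.0)

@dataclass
class Plan:
    """A TAMP solution: an ordered task plan whose steps each carry a
    motion-level parameterization."""
    steps: list[Action]

# A hypothetical two-step plan: pick up a mug, then place it in the box.
plan = Plan(steps=[
    Action("pick", "mug", joint_positions=[0.1, -0.4, 0.9, 0.0, 1.2, 0.3]),
    Action("place", "mug", joint_positions=[0.5, -0.2, 0.7, 0.1, 1.0, 0.0],
           gripper_orientation=(0.0, 1.5708, 0.0)),
])

print([s.name for s in plan.steps])  # the task plan: ['pick', 'place']
```

The task plan is the sequence of `name`/`target` pairs; the motion plan is the continuous values attached to each step.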
With so many potential sequences of actions, sampling possible solutions at random and trying one at a time could take an extremely long time.
“It is a very large search space, and a lot of actions the robot does in that space don’t actually achieve anything productive,” Garrett adds.
Instead, the researchers’ algorithm, called cuTAMP, which is accelerated using a parallel computing platform called CUDA, simulates and refines thousands of solutions in parallel. It does this by combining two techniques, sampling and optimization.
Sampling involves choosing a solution to try. But rather than sampling solutions randomly, cuTAMP limits the range of potential solutions to those most likely to satisfy the problem’s constraints. This modified sampling procedure allows cuTAMP to broadly explore potential solutions while narrowing down the sampling space.
“Once we combine the outputs of these samples, we get a much better starting point than if we sampled randomly. This ensures we can find solutions more quickly during optimization,” Shen says.
Once cuTAMP has generated that set of samples, it performs a parallelized optimization procedure that computes a cost, which corresponds to how well each sample avoids collisions and satisfies the motion constraints of the robot, as well as any user-defined objectives.
It updates the samples in parallel, chooses the best candidates, and repeats the process until it narrows them down to a successful solution.
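The sample-score-refine loop described above can be sketched with a simple batched optimizer. This is a minimal illustration of the general idea, not the actual cuTAMP implementation: the cost function, sample counts, and update rule here are all assumptions chosen for clarity.

```python
import numpy as np

def parallel_sample_and_refine(cost_fn, dim, n_samples=4096, n_iters=10,
                               elite_frac=0.1, seed=0):
    """Draw a large batch of candidate solutions, score them all at once,
    keep the lowest-cost candidates, and tighten the sampling
    distribution around them before sampling again."""
    rng = np.random.default_rng(seed)
    mean, std = np.zeros(dim), np.ones(dim)
    n_elite = max(1, int(elite_frac * n_samples))
    for _ in range(n_iters):
        # Thousands of candidates in one vectorized call; on a GPU this
        # whole batch would be evaluated in parallel.
        candidates = rng.normal(mean, std, size=(n_samples, dim))
        costs = cost_fn(candidates)                 # one cost per candidate
        elites = candidates[np.argsort(costs)[:n_elite]]
        # Refocus sampling on the best candidates found so far.
        mean, std = elites.mean(axis=0), elites.std(axis=0) + 1e-6
    return mean

# Toy "placement" objective: minimize distance to a target pose.
target = np.array([0.5, -0.3, 0.8])
cost = lambda x: np.linalg.norm(x - target, axis=1)
solution = parallel_sample_and_refine(cost, dim=3)
```

Because every candidate in the batch is scored by one array operation, evaluating 4,096 samples costs roughly the same wall-clock time as evaluating one on parallel hardware, which is the speedup the article describes.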
Harnessing accelerated computing
The researchers leverage GPUs, specialized processors that are far more powerful for parallel computation and workloads than general-purpose CPUs, to scale up the number of solutions they can sample and optimize simultaneously. This maximizes the performance of their algorithm.
“Using GPUs, the computational cost of optimizing one solution is the same as optimizing hundreds or thousands of solutions,” Shen explains.
When they tested their approach on Tetris-like packing challenges in simulation, cuTAMP took only a few seconds to find successful, collision-free plans that might take sequential planning approaches much longer to solve.
And when deployed on a real robotic arm, the algorithm always found a solution in under 30 seconds.
The system works across robots and has been tested on a robotic arm at MIT and a humanoid robot at NVIDIA. Since cuTAMP is not a machine-learning algorithm, it requires no training data, which could enable it to be readily deployed in many situations.
“You can give it a brand-new problem and it will provably solve it,” Garrett says.
The algorithm is generalizable to situations beyond packing, like a robot using tools. A user could incorporate different skill types into the system to expand a robot’s capabilities automatically.
In the future, the researchers want to leverage large language models and vision language models within cuTAMP, enabling a robot to formulate and execute a plan that achieves specific objectives based on voice commands from a user.
This work is supported, in part, by the National Science Foundation (NSF), Air Force Office for Scientific Research, Office of Naval Research, MIT Quest for Intelligence, NVIDIA, and the Robotics and Artificial Intelligence Institute.
Researchers have introduced a novel algorithm that enables a robot to “think ahead” by evaluating thousands of possible solutions in parallel and then refining the best ones to meet the constraints of the robot and its environment.
MIT physicists have demonstrated a new form of magnetism that could one day be harnessed to build faster, denser, and less power-hungry “spintronic” memory chips.
The new magnetic state is a mash-up of two main forms of magnetism: the ferromagnetism of everyday fridge magnets and compass needles, and antiferromagnetism, in which materials have magnetic properties at the microscale yet are not macroscopically magnetized.
Now, the MIT team has demonstrated a new form of magnetism, termed “p-wave magnetism.”
Physicists have long observed that electrons of atoms in regular ferromagnets share the same orientation of “spin,” like so many tiny compasses pointing in the same direction. This spin alignment generates a magnetic field, which gives a ferromagnet its inherent magnetism. Electrons belonging to magnetic atoms in an antiferromagnet also have spin, although these spins alternate, with electrons orbiting neighboring atoms aligning their spins antiparallel to each other. Taken together, the equal and opposite spins cancel out, and the antiferromagnet does not exhibit macroscopic magnetization.
The team discovered the new p-wave magnetism in nickel iodide (NiI2), a two-dimensional crystalline material that they synthesized in the lab. Like a ferromagnet, the electrons exhibit a preferred spin orientation, and, like an antiferromagnet, equal populations of opposite spins result in a net cancellation. However, the spins on the nickel atoms exhibit a unique pattern, forming spiral-like configurations within the material that are mirror-images of each other, much like the left hand is the right hand’s mirror image.
What’s more, the researchers found this spiral spin configuration enabled them to carry out “spin switching”: Depending on the direction of spiraling spins in the material, they could apply a small electric field in a related direction to easily flip a left-handed spiral of spins into a right-handed spiral of spins, and vice-versa.
The ability to switch electron spins is at the heart of “spintronics,” which is a proposed alternative to conventional electronics. With this approach, data can be written in the form of an electron’s spin, rather than its electronic charge, potentially allowing orders of magnitude more data to be packed onto a device while using far less power to write and read that data.
“We showed that this new form of magnetism can be manipulated electrically,” says Qian Song, a research scientist in MIT’s Materials Research Laboratory. “This breakthrough paves the way for a new class of ultrafast, compact, energy-efficient, and nonvolatile magnetic memory devices.”
Song and his colleagues published their results May 28 in the journal Nature. MIT co-authors include Connor Occhialini, Batyr Ilyas, Emre Ergeçen, Nuh Gedik, and Riccardo Comin, along with Rafael Fernandes at the University of Illinois Urbana-Champaign, and collaborators from multiple other institutions.
Connecting the dots
The discovery expands on work by Comin’s group in 2022. At that time, the team probed the magnetic properties of the same material, nickel iodide. At the microscopic level, nickel iodide resembles a triangular lattice of nickel and iodine atoms. Nickel is the material’s main magnetic ingredient, as the electrons on the nickel atoms exhibit spin, while those on iodine atoms do not.
In those experiments, the team observed that the spins of those nickel atoms were arranged in a spiral pattern throughout the material’s lattice, and that this pattern could spiral in two different orientations.
At the time, Comin had no idea that this unique pattern of atomic spins could enable precise switching of spins in surrounding electrons. This possibility was later raised by collaborator Rafael Fernandes, who along with other theorists was intrigued by a recently proposed idea for a new, unconventional, “p-wave” magnet, in which electrons moving along opposite directions in the material would have their spins aligned in opposite directions.
Fernandes and his colleagues recognized that if the spins of atoms in a material form the geometric spiral arrangement that Comin observed in nickel iodide, that would be a realization of a “p-wave” magnet. Then, when an electric field is applied to switch the “handedness” of the spiral, it should also switch the spin alignment of the electrons traveling along the same direction.
In other words, such a p-wave magnet could enable simple and controllable switching of electron spins, in a way that could be harnessed for spintronic applications.
“It was a completely new idea at the time, and we decided to test it experimentally because we realized nickel iodide was a good candidate to show this kind of p-wave magnet effect,” Comin says.
Spin current
For their new study, the team synthesized single-crystal flakes of nickel iodide by first depositing powders of the respective elements on a crystalline substrate, which they placed in a high-temperature furnace. The process causes the elements to settle into layers, each arranged microscopically in a triangular lattice of nickel and iodine atoms.
“What comes out of the oven are samples that are several millimeters wide and thin, like cracker bread,” Comin says. “We then exfoliate the material, peeling off even smaller flakes, each several microns wide, and a few tens of nanometers thin.”
The researchers wanted to know if, indeed, the spiral geometry of the nickel atoms’ spins would force electrons traveling in opposite directions to have opposite spins, like what Fernandes expected a p-wave magnet should exhibit. To observe this, the group applied to each flake a beam of circularly polarized light — light that produces an electric field that rotates in a particular direction, for instance, either clockwise or counterclockwise.
They reasoned that if traveling electrons interacting with the spin spirals have a spin that is aligned in the same direction, then incoming light, polarized in that same direction, should resonate and produce a characteristic signal. Such a signal would confirm that the traveling electrons’ spins align because of the spiral configuration, and furthermore, that the material does in fact exhibit p-wave magnetism.
And indeed, that’s what the group found. In experiments with multiple nickel iodide flakes, the researchers directly observed that the direction of the electron’s spin was correlated to the handedness of the light used to excite those electrons. Such is a telltale signature of p-wave magnetism, here observed for the first time.
Going a step further, they looked to see whether they could switch the spins of the electrons by applying an electric field, or a small amount of voltage, along different directions through the material. They found that when the direction of the electric field was in line with the direction of the spin spiral, the effect switched electrons along the route to spin in the same direction, producing a current of like-spinning electrons.
“With such a current of spin, you can do interesting things at the device level, for instance, you could flip magnetic domains that can be used for control of a magnetic bit,” Comin explains. “These spintronic effects are more efficient than conventional electronics because you’re just moving spins around, rather than moving charges. That means you’re not subject to any dissipation effects that generate heat, which is essentially the reason computers heat up.”
“We just need a small electric field to control this magnetic switching,” Song adds. “P-wave magnets could save five orders of magnitude of energy. Which is huge.”
“We are excited to see these cutting-edge experiments confirm our prediction of p-wave spin polarized states,” says Libor Šmejkal, head of the Max Planck Research Group in Dresden, Germany, who is one of the authors of the theoretical work that proposed the concept of p-wave magnetism but was not involved in the new paper. “The demonstration of electrically switchable p-wave spin polarization also highlights the promising applications of unconventional magnetic states.”
The team observed p-wave magnetism in nickel iodide flakes only at ultracold temperatures of about 60 kelvins.
“That’s below liquid nitrogen, which is not necessarily practical for applications,” Comin says. “But now that we’ve realized this new state of magnetism, the next frontier is finding a material with these properties, at room temperature. Then we can apply this to a spintronic device.”
This research was supported, in part, by the National Science Foundation, the Department of Energy, and the Air Force Office of Scientific Research.
Spiral magnetic order (light blue arrows) on the triangular lattice of NiI2 (black spheres represent Ni atoms) enables electrically switchable (white jagged lines) p-wave magnetism. Spin-up (orange dots) and spin-down (blue dots) electrons propagate in opposite directions and reverse their paths when the handedness of the spiral magnetic order is switched (left vs. right).
Education's critical role in the Australia-India relationship took centre stage at a high-level forum hosted at the University of Melbourne's Global Centre in Delhi, India, yesterday.
The Potter Museum of Art, the flagship art museum of the University of Melbourne, reopens today with the landmark exhibition, 65,000 Years: A Short History of Australian Art.
Health
Wildfire smoke can harm heart and lungs even after the fire has ended
Maya Brownstein
Harvard Chan School Communications
June 4, 2025
5 min read
First study to fully assess its impact on all major types of cardiovascular, respiratory diseases
Being exposed to lingering fine particulate matter (PM2.5) from wildfire smoke can have health effects up to three months afterwards, well beyond the couple of days that previous studies have identified, and the exposure can occur even after the fires have ended.
These findings were reported in a new study in Epidemiology published on May 28 by researchers at the Icahn School of Medicine at Mount Sinai and Harvard T.H. Chan School of Public Health.
This medium-term exposure to PM2.5 from wildfire smoke was associated with increased risks for various cardiorespiratory conditions, including ischemic heart disease, cerebrovascular disease, arrhythmia, hypertension, pneumonia, chronic obstructive pulmonary disease, and asthma.
“Even brief exposures from smaller fires that last only a few days can lead to long-lasting health effects.”
Yaguang Wei, associate, Harvard Chan School Department of Environmental Health
PM2.5 is a mixture of tiny particles and a major component of wildfire smoke. Compared to non-smoke PM2.5, smoke PM2.5 is smaller in size and is considered more dangerous because it is richer in carbonaceous compounds, which are more likely to induce oxidative stress and inflammation and thus pose a greater threat to public health.
The study also showed larger effects in neighborhoods with more vegetation or more disadvantages (e.g., lower education, more unemployment, lower housing quality, and higher poverty), as well as among people who have smoked at any point in their life.
This is the first study to examine the medium-term health effects of wildfire smoke. It is also the first to comprehensively assess its impact on all major types of cardiovascular and respiratory diseases.
“Wildfire activity in the United States has increased substantially over the past few decades, resulting in an increase in emissions that have begun to reverse decades of air quality improvements,” said corresponding and first author Yaguang Wei, assistant professor of environmental medicine at the Icahn School of Medicine and department associate in the Department of Environmental Health at Harvard Chan School. “Even brief exposures from smaller fires that last only a few days can lead to long-lasting health effects. There is an urgent need for research to fully understand the health impacts of wildfire smoke to raise awareness among public and health professionals, as well as to support the development of effective regulations to mitigate the impacts.”
In total, 13,755,951 hospitalizations for cardiovascular diseases and 7,990,910 for respiratory diseases were recorded among residents of all ages across the 15 U.S. states (Arizona, Colorado, Delaware, Georgia, Iowa, Kentucky, Michigan, Minnesota, North Carolina, New Jersey, New York, Oregon, Rhode Island, Washington, and Wisconsin). They were linked across time and location, using residential addresses, to smoke PM2.5 exposures between 2006 and 2016. Among the conditions studied, hypertension showed the greatest increase in hospitalization risk associated with smoke PM2.5 exposure.
The team developed and employed a novel self-controlled design within a cohort framework to mimic a quasi-experimental study. Under a cohort framework, researchers did not randomly assign participants to different smoke exposure levels; rather, they tracked a participant’s health and smoke exposure over the defined time period, which can introduce bias due to unmeasured confounders. This self-controlled design automatically addresses factors that don’t change or change slowly over time — like genetics — even if they aren’t measured, because each person is compared to themselves. This self-matching format improves the reliability of study findings.
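The self-matching idea can be illustrated with a toy within-person contrast. This is an illustrative sketch of the general principle only, with made-up numbers; it is not the study's actual statistical model.

```python
import pandas as pd

# Hypothetical records: two people, each observed over three periods.
records = pd.DataFrame({
    "person":   ["A", "A", "A", "B", "B", "B"],
    "smoke_pm": [2.0, 9.0, 4.0, 1.0, 8.0, 3.0],   # exposure, ug/m3
    "hospital": [0,   1,   0,   0,   1,   0],      # admission that period
})

# Demean exposure and outcome within each person. Any fixed personal
# characteristic (genetics, long-standing habits) is constant within a
# person, so it drops out of both demeaned columns.
within = records.groupby("person")[["smoke_pm", "hospital"]].transform(
    lambda col: col - col.mean()
)

# Association estimated purely from within-person variation:
# each person serves as their own control.
slope = (within["smoke_pm"] * within["hospital"]).sum() / (within["smoke_pm"] ** 2).sum()
print(f"within-person slope: {slope:.3f}")
```

Because the contrast is computed entirely within each person's own records, time-invariant confounders cancel even when they are unmeasured, which is the reliability gain the paragraph above describes.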
“Wildfires can burn for weeks to a month, and smoke PM2.5 may linger in the air for extended periods, which may keep the air toxic even after a wildfire has ended,” said Wei. “Current wildfire strategies are outdated and ineffective. For example, prescribed fires can reduce wildfire risk but are mainly used to protect property rather than public health. Greater effort should be placed on wildfire management rather than relying solely on traditional air quality control strategies in response to the increasing wildfire activity.”
“As wildfires become more frequent and intense, and their burden on human health becomes clearer, addressing the health impacts is a critical public health priority,” said Rosalind Wright, dean for public health and chair of the Department of Public Health at the Icahn School of Medicine. “The public and clinicians should take preventive measures during and after wildfires, such as wearing masks and using high efficiency particulate air (HEPA) filters, which are becoming more affordable.
“Findings from this study underscore the need to continue such preventive measure for a prolonged period after the fires have ended. Collaborative efforts across federal, state, and local levels are essential to safeguard the health of communities nationwide,” said Wright, who is also the Horace W. Goldsmith Professor in Life Course Health Research in the Departments of Public Health and Environmental Medicine; co-director of the Institute for Exposomic Research; and director of ConduITS, the Clinical and Translational Science Award (CTSA) Program at Mount Sinai.
Other Harvard Chan co-authors included Edgar Castro, Alexandra Shtein, Bryan Vu, Yuxi Liu, Adjani Peralta, and Joel Schwartz.
This study was funded by the National Institutes of Health.
The government recently terminated a grant supporting Kelly Rich’s research into ALS. Veasey Conway/Harvard Staff Photographer
Health
Young researcher’s ALS attack plan now a no-go
Career award among deep funding cuts affecting David Sinclair’s disease-fighting lab
Alvin Powell
Harvard Staff Writer
June 4, 2025
5 min read
When Kelly Rich was at Ohio State, her research into ALS and other neuromuscular diseases was promising enough to win her a career training award from the National Institute on Aging.
Unlike federal grants that are tied to specific institutions, the training award is intended to support young researchers as they complete doctoral studies and transition from student to independent scientist, so Rich could have taken the grant anywhere. She chose the Harvard Medical School lab of David Sinclair because of his reputation for work that flips on its head conventional thinking around aging and disease.
Many of the health threats we fear the most, including heart disease, cancer, and dementia, occur more frequently with age. Medical research has answered or is striving to answer the most pressing questions about these conditions: What goes wrong in the body, how that malfunction translates into disease, and how we can fight back.
Sinclair looks at the situation more broadly: If the risk of a given condition rises with age, what is it about aging that increases the risk? His reasoning is that if we can find the answer to that question, we might be able to lower the risk of several diseases at once.
Researchers in his and other labs have made progress toward that goal in recent years. In 2020, Sinclair and colleagues — supported in part by the National Institutes of Health and NASA — made the cover of the journal Nature after they turned back the cellular clock in the optic nerves of lab mice. The team restored the neurons’ ability to generate new axons — the connection between the eye and the brain — and repaired vision in mice with glaucoma and in older mice whose vision was fading with age.
David Sinclair is scrambling to replace two major federal grants that funded his lab’s work on aging and disease.
Harvard file photo
Since the eye’s nerve cells are the same type as neurons in the brain, the work attracted Rich to Harvard. She believes the approach used to reinvigorate the mouse optic nerve holds promise as a way to fight currently incurable conditions like amyotrophic lateral sclerosis and spinal muscular atrophy.
In recent weeks, however, her work has hit a roadblock. Rich, Sinclair, and other researchers across campus received grant termination notices linked to the government’s campaign to force Harvard to comply with proposed changes to governance and hiring, as well as audits of faculty and student viewpoints.
“We’re all just looking at each other going, ‘What the hell just happened?’ and it’s slowly sinking in,” Sinclair said. “We’re really trying to be supportive to our lab members, the students who’ve staked their lives on this, and the postdocs, who are the ones most likely to be let go. It’s extremely terrifying for them.”
Sinclair’s lab lost two major grants — a five-year, $1.5 million grant that provides its financial foundation and Rich’s career award of $438,000 over six years, which funds her salary plus a technician to support her research.
“I’m still letting the dust settle a bit,” Rich said. “This is an especially tough moment for early career scientists and postdocs and students. The effect of these cuts is to take early career scientists who want to be the next generation of academic leaders and erode their confidence, their trust in the infrastructure that has driven science and science careers. Academia looks very different today.”
The termination of Sinclair’s R01 grant — the NIH funding that supports his lab — casts his entire research program into doubt. He thinks that he has enough resources to avoid layoffs for a few months, but he has been scrambling to find private grants to replace federal funds.
“I’m traveling around the country and the world to see if I can raise money to keep going and it’s just a race against time,” he said. “I can keep going for a while, maybe a few months, but ultimately, we relied on government money. So I’m looking for support from companies and the public to replace what was lost.”
Despite the unsettled environment, Rich said she’s intent on not making “snap decisions” about her next steps. Her preference is to continue to work with Sinclair and move on to an academic career, but she’s also not averse to transitioning toward an industry role, especially with the longstanding research partnership between the federal government and higher ed now in jeopardy.
“I’m going to stick with where I am for as long as I can right now,” she said. “But I don’t think you can ignore the fact that it seems tougher now than ever to start a lab in academia, given that the infrastructure of federal funding so many early career professors rely on is largely gone. I haven’t made any decisions, but these are things that you can’t ignore. You have to be practical when you’re making decisions about that next step.”
“Close your eyes and imagine we are on the same team. Same arena. Same jersey. And the game is on the line,” Jaylen Brown, the 2024 NBA Finals MVP for the Boston Celtics, said to a packed room of about 200 people at the recent Day of Climate event at the MIT Museum.
“Now think about this: We aren’t playing for ourselves; we are playing for the next generation,” Brown added, encouraging attendees to take climate action.
The inaugural Day of Climate event brought together local learners, educators, community leaders, and the MIT community. Featuring project showcases, panels, and a speaker series, the event sparked hands-on learning and inspired climate action across all ages.
The event marked the celebration of the first year of a larger initiative by the same name. Led by the pK-12 team at MIT Open Learning, Day of Climate has brought together learners and educators by offering free, hands-on curriculum lessons and activities designed to introduce learners to climate change, teach how it shapes their lives, and consider its effects on humanity.
Cynthia Breazeal, dean of digital learning at MIT Open Learning, notes the breadth of engagement across MIT that made the event, and the larger initiative, possible with contributions from more than 10 different MIT departments, labs, centers, and initiatives.
“MIT is passionate about K-12 education,” she says. “It was truly inspiring to witness how our entire community came together to demonstrate the power of collaboration and advocacy in driving meaningful change.”
From education to action
The event kicked off with a showcase, where the Day of Climate grantees and learners invited attendees to learn about their projects and meaningfully engage with lessons and activities. Aranya Karighattam, a local high school senior, adapted the curriculum Urban Heat Islands — developed by Lelia Hampton, a PhD student in electrical engineering and computer science at MIT, and Chris Rabe, program director at the MIT Environmental Solution Initiative — sharing how this phenomenon affects the Boston metropolitan area.
Karighattam discussed what could be done to shield local communities from urban heat islands. They suggested doubling the tree cover in areas with the lowest quartile tree coverage as one mitigating strategy, but noted that even small steps, like building a garden and raising awareness for this issue, can help.
Day of Climate echoed a consistent call to action, urging attendees to meaningfully engage in both education and action. Brown, who is an MIT Media Lab Director’s Fellow, spoke about how education and collective action will pave the way to tackle big societal challenges. “We need to invest in sustainability communities,” he said. “We need to invest in clean technology, and we need to invest in education that fosters environmental stewardship.”
The event, part of MIT’s broader sustainability efforts such as The Climate Project, reflected a commitment to building a resilient and sustainable future for all. Day of Climate panelist Sophie Shen, influenced by Climate Action Through Education (CATE), shared how climate education inspired her civic life. “Learning about climate change has inspired me to take action on a wider systemic level,” she said.
Shen, a senior at Arlington High School and local elected official, emphasized how engagement and action looks different for everyone. “There are so many ways to get involved,” she said. “That could be starting a community garden — those can be great community hubs and learning spaces — or it could include advocating to your local or state governments.”
Becoming a catalyst for change
The larger Day of Climate initiative encourages young people to understand the interdisciplinary nature of climate change and consider how the changing climate impacts many aspects of life. With curriculum available for learners from ages 4 to 18, these free activities range from Climate Change Charades — where learners act out words like “deforestation” and “recycling” — to Climate Change Happens Below Water, where learners use sensors to analyze water quality data like pH and solubility.
Many of the speakers at the event shared personal anecdotes from their childhood about how climate education, both in and out of the classroom, changed the trajectory of their lives. Addaline Jorroff, deputy climate chief and director of mitigation and community resilience in the Office of Climate Resilience and Innovation for the Commonwealth of Massachusetts, explained how resources from MIT were instrumental in her education as a middle and high schooler, while Jaylen Brown described how his grandmother helped him see the importance of taking care of the planet when he was young, through recycling and picking up trash together.
Claudia Urrea, director of the pK-12 team at Open Learning and director of Day of Climate, emphasizes that providing opportunities at schools — through new curriculum, classroom resources, and mentorship — is crucial, but that other educational opportunities also matter: in particular, opportunities that support learners in becoming strong leaders.
“I strongly believe that this event not only inspired young learners to take meaningful action, both large and small, towards a better future, but also motivated all the stakeholders to continue to create opportunities for these young learners to emerge as future leaders,” Urrea says.
The team plans to hold the Day of Climate event annually, bringing together young people, educators, and the MIT community. Urrea hopes the event will act as a catalyst for change — for everyone.
“We hope Day of Climate serves as the opportunity for everyone to recognize the interconnectedness of our actions,” Urrea says. “Understanding this larger system is crucial for addressing current and future challenges, ultimately making the world a better place for all.”
The Day of Climate event was hosted by the Day of Climate team in collaboration with MIT Climate Action Through Education (CATE) and Earth Day Boston.
When people think of MIT, they may first think of code, circuits, and cutting-edge science. But the school has a rich history of interweaving art, science, and technology in unexpected and innovative ways — and that’s never been more clear than with the Institute’s latest festival, Artfinity: A Celebration of Creativity and Community at MIT.
After an open-call invitation to the MIT community in early 2024, the inaugural Artfinity delivered an extended multi-week exploration of art and ideas, with more than 80 free performing and visual arts events between Feb. 15 and May 2, including a two-day film festival, interactive augmented reality art installations, an evening at the MIT Museum, a simulated lunar landing, and concerts by both student groups and internationally renowned musicians.
“Artfinity was a fantastic celebration of MIT’s creative excellence, offering so many different ways to explore our thriving arts culture,” says MIT president Sally Kornbluth. “It was wonderful to see people from our community getting together with family, friends, and neighbors from Cambridge and Boston to experience the joy of music and the arts.”
Among the highlights were a talk by Tony-winning scenic designer Es Devlin, a concert by Grammy-winning rapper and visiting scholar Lupe Fiasco, and a series of events commemorating the opening of the Edward and Joyce Linde Music Building.
Devlin shared art tied to her recent spring residency at MIT as the latest honoree of the Eugene McDermott Award in the Arts. Working with MIT faculty, students, and staff, she inspired a site-specific installation called “Face to Face,” in which more than 100 community members were paired with strangers to draw each other. In recent years, Devlin has focused her work on fostering interpersonal connection, as in her London multimedia exhibition “Congregation,” in which she drew 50 people displaced from their homelands and documented their stories on video.
Fiasco’s May 2 performance centered around a new project inspired by MIT’s public art collection, developed this year in collaboration with students and faculty as part of his work as a visiting scholar and teaching the class “Rap Theory and Practice.” With the backing of MIT’s Festival Jazz Ensemble, Fiasco presented original compositions based on famed campus sculptures such as Alexander Calder’s La Grande Voile [The Big Sail] and Jaume Plensa’s Alchemist, with members of the MIT Rap Ensemble also jumping on board for many of the pieces. Several students in the ensemble also spearheaded complex multi-instrument arrangements of some of Fiasco’s most popular songs, including “The Show Goes On” and “Kick, Push.”
Artfinity’s programming also encompassed an eclectic mix of concerts commemorating the new Linde Music Building, which features the 390-seat Tull Hall, rehearsal rooms, a recording studio, and a research lab to help support a new music technology graduate program launching this fall. Events included performances of multiple student ensembles, the Boston Symphony Chamber Players, the Boston Chamber Music Society, Sanford Biggers’ group Moonmedicin, and Grammy-winning jazz saxophonist Miguel Zenón, an assistant professor of music at MIT.
“Across campus, from our new concert hall to the Great Dome, in gallery spaces and in classrooms, our community was inspired by the visual and performing arts of the Artfinity festival,” says MIT provost Cynthia Barnhart. “Artfinity has been an incredible celebration and display of the collective creativity and innovative spirit of our community of students, faculty, and staff.”
A handful of other Artfinity pieces also made use of MIT’s iconic architecture, including Creative Lumens and Media Lab professor Behnaz Farahi’s “Gaze to the Stars.” Taking place March 12–14 and coinciding with the total lunar eclipse, the large-scale video projections illuminated a wide range of campus buildings, transforming the exteriors of the new Linde Music Building, the MIT Chapel, the Stratton Student Center, the Zesiger Sports & Fitness Center, and even the Great Dome, which Farahi’s team affixed with images of eyes from the MIT community.
Other popular events included the MIT Museum’s After Dark series and its Argus Installation, which examined the interplay of light and hand-blown glass. A two-day Bartos Theatre film festival featured works by students, staff, and faculty, ranging from shorts to 30-minute productions, and spanning the genres of fiction, nonfiction, animation, and experimental pieces. The Welcome Center also hosted “All Our Relations,” a multimedia celebration of MIT's Indigenous community through song, dance, and story.
An Institute event, Artfinity was organized by the Office of the Arts, and led by professor of art, culture, and technology Azra Akšamija and Institute Professor of Music Marcus A. Thompson. Both professors spoke about the importance of spotlighting the arts and demonstrating a diverse breadth and depth of programming for future iterations of the event.
“People think of MIT as a place you go to only for technology. But, in reality, MIT has always attracted students with broad interests and required them to explore balance in their programs with substantive world-class offerings in the humanities, social sciences, and visual and performing arts,” says Thompson. “We are hoping this festival, Artfinity, will showcase the infinite variety and quality we have been offering and actually doing in the arts for quite some time.”
Professor of music and theater arts Jay Scheib sees the mix of art and technology as a way for students to explore new approaches to research challenges. “In the arts, we tend to look at problems in a different way … framed by ideas of aesthetics, civic discourse, and experience,” says Scheib. “This approach can help students in physics, aerospace design, or artificial intelligence to ask different, yet equally useful, questions.”
An Institute-sponsored campus-wide event organized by the Office of the Arts, Artfinity represents MIT’s largest arts festival since its 150th anniversary in 2011. Akšamija, who is director of MIT’s Art, Culture, and Technology (ACT) program, says that the festival serves as both a student spotlight and an opportunity to interact with, and meaningfully give back to, MIT’s surrounding community in Cambridge and greater Boston.
“What became evident during the planning of this festival was the quantity and quality of art here at MIT, and how much of that work is cutting-edge,” says Akšamija. “We wanted to celebrate the creativity and joyfulness of the brilliant minds on campus [and] to bring joy and beauty to MIT and the surrounding community.”
Rapper Lupe Fiasco performs works from his project, “Ghotiing,” with the MIT Festival Jazz Ensemble on May 2 in Kresge Auditorium as the culmination of Artfinity.
With a dramatic victory in the 4x400m relay, the MIT women's track and field team clinched the 2025 NCAA Division III Outdoor Track and Field National Championship May 24 at the SPIRE Institute's Outdoor Track and Field facility. The title was MIT's first NCAA women's outdoor track and field national championship. MIT finished first among 79 teams with 56 points; Washington University was the runner-up with 47 points, followed by the University of Wisconsin-La Crosse with 38 points.
With the victory, MIT completed a sweep of the 2024-25 NCAA Division III women's cross country, indoor track and field, and outdoor track and field titles — becoming the first women's program to sweep all three in the same year.
MIT earned 20 All-America honors across three days, including the program's first relay national championship in the 4x400m on Saturday and Alexis Boykin's eighth career national title with an NCAA record-breaking performance in the shot put on Friday.
On Thursday, Boykin opened the championships with a third-place performance in the discus as MIT quickly moved to the top of the team leaderboard on the first day of competition. Boykin and classmate Emily Ball each earned a spot on the podium. Boykin was third with a throw of 45.12m (148' 0") on her second attempt and Ball was seventh with a mark of 41.90m (137' 5") on her final throw of prelims.
In the pole vault, junior Katelyn Howard tied for fifth, clearing 3.85m (12' 7.5") to pick up three points for MIT. Howard passed on the opening height and cleared both 3.75m and 3.85m, but was unable to clear the fourth progression. Classmate Hailey Surace was 14th, clearing 3.75m (12' 3.5").
Junior Elaine Wang picked up a big point for MIT with an eighth-place finish in the javelin. Wang's second attempt traveled 40.44m (132' 8"), briefly moving her into sixth place; she eventually finished eighth on the strength of that mark.
The opening day concluded with junior Kate Sanderson finishing fourth with a personal best of 34:48.601 in the 10,000m to earn a spot on the podium, as MIT continued to lead the team standings.
On Friday, Boykin returned on day two and set the NCAA Division III women's shot put all-time record, winning her eighth career national championship with a throw of 16.80m (55’ 1/2”) on her final preliminary attempt. The mark broke Robyn Jarocki's Division III record, and Boykin won the event by over 2 meters.
MIT wrapped action with the 3,000m Steeplechase final, where sophomore Liv Girand finished in 10th place in 10:58.71 to earn the first All-America honor of her career. MIT continued to lead the team standings at the end of the second day of competition.
On Saturday, Boykin earned her third All-America honor in three events at the championships with a third-place finish in the hammer with a throw of 58.79m (192' 10”), while junior Nony Otu Ugwu took 10th with a jump of 11.91m (39' 1") on her final attempt of prelims. Otu Ugwu did not advance to the final.
MIT shined on the track to secure the title, as grad student Gillian Roeder and senior Christina Crow picked up seven big points in the 1,500m final. Roeder was fifth in 4:27.76 and Crow was one spot back, finishing sixth in 4:28.81.
Senior Marina Miller followed and picked up six more points while earning the first of two All-America honors on the day with a third-place finish and a personal record of 54.32 in the 400m.
Junior Rujuta Sane, Roeder, and junior Kate Sanderson finished 13th, 14th, and 16th, respectively, in the 5,000m. Sane had a time of 16:51.45, with Roeder finishing in 16:54.07 and Sanderson clocking in at 17:00.55.
With MIT leading second-place Washington University by seven points heading into the final event, MIT's 4x400m relay team of senior Olivia Dias, junior Shreya Kalyan, junior Krystal Montgomery, and Miller left no doubt, securing the team championship with a national title of their own, as Miller moved from third to first over the final 50m to win an electric final race.
Tiffany Onyeiwu ’25 blows bubbles during the 374th Harvard University Commencement Exercises in Tercentenary Theatre.
Stephanie Mitchell/Harvard Staff Photographer
Campus & Community
To boldly go
Veasey Conway
Harvard Staff Photographer
June 4, 2025
2 min read
Class of ’25 heads for new frontiers
More than 9,000 graduates from the Class of 2025, representing all of Harvard University’s Schools, streamed into Harvard Yard on May 29 for Harvard’s 374th Commencement Exercises.
The graduates, most wearing black gowns and bits of crimson, processed into Tercentenary Theatre. Families, friends, and well-wishers filled the steps of Widener Library, spreading out across the Yard.
Some graduates wore colorful stoles and turned their mortarboards into canvases for words of wisdom and inspiration — “I Believe in Myself”; “Look at Me Go”; “She Made It Happen.” Harvard Kennedy School students tossed beach ball globes in the air. The Law School waved gavels.
Sunshine found its way through gray skies as the graduates listened to author and physician Abraham Verghese deliver the principal Commencement address from in front of Memorial Church.
“Graduates, the decisions you make in the future when under pressure will say something about your character, while they will also shape and transform you in unexpected ways,” Verghese said. “Make your decisions worthy of those who supported, nurtured, and sacrificed for you: your parents, your partners, your family, your ancestors.”
Harvard University President Alan Garber (center) processes to Tercentenary Theatre.
Stephanie Mitchell/Harvard Staff Photographer
Graduates process past Widener Library.
Photo by Grace DuVal
Peter Koutoujian (center), sheriff of Middlesex County, calls the meeting to order.
Veasey Conway/Harvard Staff Photographer
Rakesh Khurana confers degrees for the last time as Danoff Dean of Harvard College.
Photo by Grace DuVal
Graduates from Kenneth C. Griffin Graduate School of Arts and Sciences celebrate.
Niles Singer/Harvard Staff Photographer
Alexandra Nebel wears a hand-embroidered hat to celebrate Commencement.
Photo by Grace DuVal
Graduates from the Harvard T.H. Chan School of Public Health hold up red hand clappers, which honor hand washing as a cornerstone of public health.
Veasey Conway/Harvard Staff Photographer
Rita Moreno (center) smiles during the conferral of honorary degrees.
Photo by Grace DuVal
Richard B. Alley gives a thumbs up after receiving an honorary degree.
Niles Singer/Harvard Staff Photographer
Commencement speaker Abraham Verghese responds to applause after receiving his honorary degree.
Photo by Grace DuVal
Graduates line up to receive their diplomas at Eliot House.
Stephanie Mitchell/Harvard Staff Photographer
Family members watch and photograph the Eliot House diploma presentation ceremony.
Stephanie Mitchell/Harvard Staff Photographer
Graduating seniors pause and reflect during the valediction for the Class of 2025.
Veasey Conway/Harvard Staff Photographer
Facing the stage in Tercentenary Theatre, a banner decorated with the Veritas shield adorns the columns of Widener Library.
The research team, led by the University of Cambridge, used the GPT-4 large language model (LLM) to identify hidden patterns buried in the mountains of scientific literature to identify potential new cancer drugs.
To test their approach, the researchers prompted GPT-4 to identify potential new drug combinations that could have a significant impact on a breast cancer cell line commonly used in medical research. They instructed it to avoid standard cancer drugs, identify drugs that would attack cancer cells while not harming healthy cells, and prioritise drugs that were affordable and approved by regulators.
The drug combinations suggested by GPT-4 were then tested by human scientists, both in combination and individually, to measure their effectiveness against breast cancer cells.
In the first lab-based test, three of the 12 drug combinations suggested by GPT-4 worked better than current breast cancer drugs. The LLM then learned from these tests and suggested a further four combinations, three of which also showed promising results.
The results, reported in the Journal of the Royal Society Interface, represent the first instance of a closed-loop system where experimental results guided an LLM, and LLM outputs – interpreted by human scientists – guided further experiments. The researchers say that tools such as LLMs are not a replacement for scientists, but could instead be supervised AI researchers, with the ability to originate, adapt and accelerate discovery in areas like cancer research.
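The closed-loop workflow the researchers describe — an LLM proposes candidates, human scientists test them in the lab, and the measured results condition the next round of prompting — can be sketched in a few lines. This is a hypothetical illustration, not the study's actual code: `suggest` stands in for a GPT-4 prompt and `test` for a wet-lab experiment, and the drug names below are used only as placeholders.

```python
def run_closed_loop(suggest, test, rounds, per_round):
    """Generic closed loop: `suggest(history, n)` proposes up to n new
    candidates given all results so far (stand-in for prompting the LLM);
    `test(candidate)` returns a measured effectiveness score (stand-in for
    the lab experiment). Accumulated results feed back into later rounds."""
    history = []
    for _ in range(rounds):
        for combo in suggest(history, per_round):
            history.append((combo, test(combo)))
    return history

# Toy demonstration with mock functions (not the study's real data):
def mock_suggest(history, n):
    tried = {combo for combo, _ in history}
    pool = [("simvastatin", "disulfiram"), ("drugA", "drugB"),
            ("drugC", "drugD"), ("drugE", "drugF")]
    return [c for c in pool if c not in tried][:n]

def mock_test(combo):
    # Pretend one repurposed pair clearly outperforms the rest.
    return 0.9 if combo == ("simvastatin", "disulfiram") else 0.2

results = run_closed_loop(mock_suggest, mock_test, rounds=2, per_round=2)
best = max(results, key=lambda r: r[1])
```

The key design point is that `history` flows back into `suggest`, so each round of LLM proposals is conditioned on experimental feedback rather than generated blind.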
Often, LLMs such as GPT-4 return results that aren’t true, known as hallucinations. However, in scientific research, hallucinations can sometimes be beneficial if they lead to new ideas that are worth testing.
“Supervised LLMs offer a scalable, imaginative layer of scientific exploration, and can help us as human scientists explore new paths that we hadn’t thought of before,” said Professor Ross King from Cambridge’s Department of Chemical Engineering and Biotechnology, who led the research. “This can be useful in areas such as drug discovery, where there are many thousands of compounds to search through.”
Based on the prompts provided by the human scientists, GPT-4 selected drugs based on the interplay between biological reasoning and hidden patterns in the scientific literature.
“This is not automation replacing scientists, but a new kind of collaboration,” said co-author Dr Hector Zenil from King’s College London. “Guided by expert prompts and experimental feedback, the AI functioned like a tireless research partner—rapidly navigating an immense hypothesis space and proposing ideas that would take humans alone far longer to reach.”
The hallucinations – normally viewed as flaws – became a feature, generating unconventional combinations worth testing and validating in the lab. The human scientists inspected the mechanistic reasons the LLM found to suggest these combinations in the first place, feeding the system back and forth in multiple iterations.
By exploring subtle synergies and overlooked pathways, GPT-4 helped identify six promising drug pairs, all tested through lab experiments. Among the combinations, simvastatin (commonly used to lower cholesterol) and disulfiram (used in alcohol dependence) stood out against breast cancer cells. Some of these combinations show potential for further research in therapeutic repurposing.
These drugs, while not traditionally associated with cancer care, could be potential cancer treatments, although they would first have to go through extensive clinical trials.
“This study demonstrates how AI can be woven directly into the iterative loop of scientific discovery, enabling adaptive, data-informed hypothesis generation and validation in real time,” said Zenil.
“The capacity of supervised LLMs to propose hypotheses across disciplines, incorporate prior results, and collaborate across iterations marks a new frontier in scientific research,” said King. “An AI scientist is no longer a metaphor without experimental validation: it can now be a collaborator in the scientific process.”
The research was supported in part by the Alice Wallenberg Foundation and the UK Engineering and Physical Sciences Research Council (EPSRC).
An ‘AI scientist’, working in collaboration with human scientists, has found that combinations of cheap and safe drugs – used to treat conditions such as high cholesterol and alcohol dependence – could also be effective at treating cancer, a promising new approach to drug discovery.
The accumulation of microplastics in the environment, and within our bodies, is an increasingly worrisome issue. But predicting where these ubiquitous particles will accumulate, and therefore where remediation efforts should be focused, has been difficult because of the many factors that contribute to their dispersal and deposition.
New research from MIT shows that one key factor in determining where microparticles are likely to build up has to do with the presence of biofilms. These thin, sticky biopolymer layers are shed by microorganisms and can accumulate on surfaces, including along sandy riverbeds or seashores. The study found that, all other conditions being equal, microparticles are less likely to accumulate in sediment infused with biofilms, because if they land there, they are more likely to be resuspended by flowing water and carried away.
The open-access findings appear in the journal Geophysical Research Letters, in a paper by MIT postdoc Hyoungchul Park and professor of civil and environmental engineering Heidi Nepf. “Microplastics are definitely in the news a lot,” Nepf says, “and we don’t fully understand where the hotspots of accumulation are likely to be. This work gives a little bit of guidance” on some of the factors that can cause these particles, and small particles in general, to accumulate in certain locations.
Most experiments looking at the ways microparticles are transported and deposited have been conducted over bare sand, Park says. “But in nature, there are a lot of microorganisms, such as bacteria, fungi, and algae, and when they adhere to the stream bed they generate some sticky things.” These substances are known as extracellular polymeric substances, or EPS, and they “can significantly affect the channel bed characteristics,” he says. The new research focused on determining exactly how these substances affected the transport of microparticles, including microplastics.
The research involved a flow tank with a bottom lined with fine sand, and sometimes with vertical plastic tubes simulating the presence of mangrove roots. In some experiments the bed consisted of pure sand, and in others the sand was mixed with a biological material to simulate the natural biofilms found in many riverbed and seashore environments.
Water mixed with tiny plastic particles was pumped through the tank for three hours, and then the bed surface was photographed under ultraviolet light that caused the plastic particles to fluoresce, allowing a quantitative measurement of their concentration.
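One common way to turn such a fluorescence photograph into a number is simple intensity thresholding: count the fraction of pixels brighter than a cutoff as a proxy for particle coverage. The sketch below is a minimal, hypothetical version of that idea (the paper does not specify its image-processing pipeline); the threshold value is illustrative.

```python
import numpy as np

def fluorescence_coverage(image, threshold=200):
    """Fraction of pixels whose brightness exceeds `threshold`, used as a
    proxy for the concentration of fluorescing particles on the bed.
    `image` is a 2-D grayscale array with values 0-255."""
    bright = image > threshold          # boolean mask of fluorescing pixels
    return bright.mean()                # fraction of True pixels

# Toy example: a 4x4 frame with a 2x2 bright patch -> coverage of 0.25
frame = np.zeros((4, 4), dtype=np.uint8)
frame[:2, :2] = 255
coverage = fluorescence_coverage(frame)
```

Comparing this coverage fraction between beds with and without EPS would give the kind of quantitative deposition comparison the experiments report.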
The results revealed two different phenomena that affected how much of the plastic accumulated on the different surfaces. Immediately around the rods that stood in for above-ground roots, turbulence prevented particle deposition. In addition, as the amount of simulated biofilms in the sediment bed increased, the accumulation of particles also decreased.
Nepf and Park concluded that the biofilms filled up the spaces between the sand grains, leaving less room for the microparticles to fit in. The particles were more exposed because they penetrated less deeply in between the sand grains, and as a result they were much more easily resuspended and carried away by the flowing water.
“These biological films fill the pore spaces between the sediment grains,” Park explains, “and that makes the deposited particles — the particles that land on the bed — more exposed to the forces generated by the flow, which makes it easier for them to be resuspended. What we found was that in a channel with the same flow conditions and the same vegetation and the same sand bed, if one is without EPS and one is with EPS, then the one without EPS has a much higher deposition rate than the one with EPS.”
Nepf adds: “The biofilm is blocking the plastics from accumulating in the bed because they can’t go deep into the bed. They just stay right on the surface, and then they get picked up and moved elsewhere. So, if I spilled a large amount of microplastic in two rivers, and one had a sandy or gravel bottom, and one was muddier with more biofilm, I would expect more of the microplastics to be retained in the sandy or gravelly river.”
All of this is complicated by other factors, such as the turbulence of the water or the roughness of the bottom surface, she says. But it provides a “nice lens” to provide some suggestions for people who are trying to study the impacts of microplastics in the field. “They’re trying to determine what kinds of habitats these plastics are in, and this gives a framework for how you might categorize those habitats,” she says. “It gives guidance to where you should go to find more plastics versus less.”
As an example, Park suggests, in mangrove ecosystems, microplastics may preferentially accumulate in the outer edges, which tend to be sandy, while the interior zones have sediment with more biofilm. Thus, this work suggests “the sandy outer regions may be potential hotspots for microplastic accumulation,” he says, and can make this a priority zone for monitoring and protection.
“This is a highly relevant finding,” says Isabella Schalko, a research scientist at ETH Zurich, who was not associated with this research. “It suggests that restoration measures such as re-vegetation or promoting biofilm growth could help mitigate microplastic accumulation in aquatic systems. It highlights the powerful role of biological and physical features in shaping particle transport processes.”
The work was supported by Shell International Exploration and Production through the MIT Energy Initiative.
Science & Tech
Social media fueled divisions. Teaming up may help heal.
Ph.D. candidate Lucas Woodley, the paper’s lead author (left), with Professor of Psychology Joshua D. Greene.
Photo by Dylan Goodman
Christy DeSmith
Harvard Staff Writer
June 3, 2025
6 min read
Study finds pairing members of opposing parties on the same side to compete in specially designed quiz eases partisanship
Algorithm-driven digital feeds have deepened the split between red and blue America. But a new online tool may help bring the country back together.
The virtual quiz game Tango, developed at Harvard, pairs Democrats and Republicans on common teams, where bipartisanship quickly emerges as a competitive advantage. “It’s really the opposite of the nasty, divisive posting you find on social media,” offered Tango co-creator Joshua D. Greene ’97, a professor of psychology and co-author of a new study measuring the game’s impact.
The results, published this week in the journal Nature Human Behaviour, showed decreased negative partisanship, increased warmth, and even financial generosity between U.S. players from opposing parties. The effect was comparable, the authors wrote, to rolling back approximately 15 years of rising polarization in American political life.
What’s more, the changes persisted long after just one hour of play.
“We see over and over again that the effects last at least a month and often up to four months from playing just once,” reported Greene, whose research partners include current and former Kenneth C. Griffin Graduate School of Arts and Sciences students.
“We see over and over again that the effects last at least a month and often up to four months from playing just once.”
Joshua D. Greene
Students Shira Li (left) and Shlok Goyal playing Tango at a special event in the Graduate Student Lounge in Lehman Hall.
“I was happy with the book,” he recalled. “But it was very philosophical and offered little practical guidance for real-world problem-solving. For me, that provoked a kind of reckoning about how to spend my time.”
The experimental psychologist, neuroscientist, and philosopher soon found himself returning to core principles in both the life and social sciences.
“They were pointing in the same direction,” he said. “That is, everything is really about mutually beneficial cooperation. Molecules come together to form cells; cells form more complicated cells and colonies and creatures with organs that cooperate. Individuals form societies and tribes and chiefdoms and nations and occasionally United Nations.
“At every single level,” Greene continued, “the reason the world isn’t just primordial soup is because parts can come together to form wholes that can accomplish more together than they can separately.”
But how to move Americans toward this evolutionary sweet spot?
Former graduate student Evan DeFilippis, M.A. ’19, M.B.A. ’22, was enlisted to help dream up solutions that were more scalable and a lot more fun than the standard dialogue circle. Later joining the project team were Shankar Ravi, M.Ed. ’22, a data/software specialist in Greene’s lab, and Lucas Woodley ’23, a Ph.D. candidate studying psychology and the new paper’s lead author.
As an undergraduate economics and psychology concentrator, Woodley got interested in conflict resolution after taking a Gen Ed course taught by Harvard Medical School’s Daniel Shapiro. Woodley and a few classmates went on to author a book on negotiation, featuring a free, hands-on curriculum for faculty and students.
“People reached out to say, ‘Hey, I tried out this exercise in my class, and it worked really well — the students had a lot of fun.’ But it was always on such a small scale,” Woodley said. “As I was thinking ‘Wouldn’t it be nice to have an intervention that was much more scalable while still keeping that element of enjoyment?’ I was applying to graduate schools and ran into Josh.”
With the help of the Washington, D.C.-based Global Development Incubator, the Tango project team engineered a platform that presents players across the U.S. with three rounds of trivia. Field-testing has yielded a set of questions that reliably reveal partisan knowledge gaps. Some cover cultural terrain, advantaging either Democrats (“Who are the main characters from ‘Stranger Things’ on Netflix?”) or Republicans (“Name the family from ‘Duck Dynasty.’”).
Other questions are crafted to affirm or challenge partisan beliefs. For example, Americans on the left are more likely to know that immigrants in the U.S. commit relatively few crimes. Right-leaning players know relatively few gun deaths involve assault-style weapons.
“We build in uncomfortable truths for both sides. People still left us comments saying they want to play again.”
Lucas Woodley
“We build in uncomfortable truths for both sides,” Woodley said. “People still left us comments saying they want to play again.”
Throughout, two-person teams rely on Tango’s chat function to coordinate answers. As Woodley pointed out, this invites debate as well as mini-celebrations of a partner’s contributions. “That seems to be what makes the game so effective,” he said.
The academic paper represents five randomized controlled trials based on nearly 5,000 U.S. players who were recruited online. In one of the experiments, Democrats and Republicans were given $100 to allot as they like. Those who had teamed up with a political rival proved far more generous with members of the opposing party.
Eventually, the Tango team hopes for regularly scheduled sessions where Americans at large can join in for game night at letstango.org. Woodley also envisions bar-goers encountering Tango at the local watering hole. But for now, they’ve cooked up other creative ways of distributing a game that requires simultaneity.
With support from the President’s Building Bridges Fund, Woodley partnered this spring with undergraduate Houses and a GSAS student group to stage a two-hour Tango event. The project team has already reached thousands of undergraduates via rollouts on other U.S. campuses and recently wrapped up its first trial with employees at a Fortune 500 company.
“A lot of business leaders feel like they’re being forced to pick sides — and they just want to sell a product,” said Greene, who envisions for-profit entities incentivizing Tango one day via coupons and other customer perks.
As polarization surges globally, the team is also at work customizing Tango for a variety of national contexts. Pilot testing is underway in Israel, with questions for India and Northern Ireland in the works.
Constructive dialogue is still a critical intervention in these divided societies, Greene said. “But what makes dialogue even possible is that basic sense of mutual respect and openness — of thinking ‘this person is on my team.’
“So, what we’re trying to do with this game is expand the definition of ‘us,’” he added. “It’s less like two smart humans having a debate about immigration and more like two chimps picking bugs out of each other’s fur.”
Stanley Fischer PhD ’69, MIT professor emeritus of economics and a towering figure in both academic macroeconomics and global economic policymaking, passed away on May 31. He was 81. Fischer was a foundational scholar as well as a wise mentor and a central force in shaping the macroeconomic tradition of MIT’s Department of Economics that continues today.
“Together with Rudi Dornbusch and later Olivier Blanchard, Stan was one of the intellectual engines that powered MIT macroeconomics in the 1970s and beyond,” says Ricardo Caballero PhD ’88, one of Fischer’s advisees and now the Ford International Professor of Economics at MIT. “He was quietly brilliant, never flashy, and always razor-sharp. His students learned not just from his lectures or his groundbreaking work on New Keynesian models and rational expectations, but from the clarity of his mind and the gentleness of his wit. Nearly 40 years later, I can still hear him saying: ‘Isn’t it easier to do it right the first time than to explain why you didn’t?’ That line has stayed with me ever since. A simple comment from Stan during a seminar — often offered with a disarming smile — could puncture a weak argument or crystallize a central insight. He taught generations of macroeconomists to prize discipline, clarity, and policy relevance.”
Olivier Blanchard PhD ’77, the Robert M. Solow Professor of Economics Emeritus at MIT and another advisee, explains that Fischer “was one of the most popular teachers, and one of the most popular thesis advisers. We flocked to his office, and I suspect that the only time for research he had was during the night. What we admired most were his technical skills — he knew how to use stochastic calculus — and his ability to take on big questions and simplify them to the point where the answer, ex post, looked obvious. When Rudi Dornbusch joined him in 1975, macro and international quickly became the most exciting fields at MIT.” Within a decade of his joining the MIT faculty, “Stan had acquired near-guru status.”
Fischer built bridges between economic theory and the practice of economic policy. He served as chief economist of the World Bank (1988-90), first deputy managing director at the International Monetary Fund (IMF, 1994-2001), governor of the Bank of Israel (2005-13), and vice chair of the U.S. Federal Reserve (2014-17). These leadership roles gave him a rare platform to implement ideas he helped develop in the classroom and he was widely praised for his successes at averting financial crises across several decades and continents. Yet even as he moved through the highest circles of global policymaking, he remained a teacher at heart — accessible, thoughtful, and generous with his time.
At MIT, Fischer is best remembered for inspiring generations of graduate students who moved between academics and policy just as he did. Over the course of two decades before he began his active policy role, he was primary adviser for 49 PhD students, secondary adviser to another 23, and a celebrated teacher for many more.
Many of his students became important macroeconomic policymakers, including Ben Bernanke PhD ’79; Mario Draghi PhD ’77; Ilan Goldfajn PhD ’95; Philip Lowe PhD ’91; and Kazuo Ueda PhD ’80, who chaired the Federal Reserve Board, the European Central Bank, the Banco Central do Brazil, the Reserve Bank of Australia, and the Bank of Japan. Students Gregory Mankiw PhD ’84 and Christina Romer PhD ’85 chaired the Council of Economic Advisors; Maurice Obstfeld PhD ’79 and Kenneth Rogoff PhD ’80 were chief economist at the International Monetary Fund; and Frederic Mishkin PhD ’76 was a governor of the Federal Reserve. Another of his students, former Treasury Secretary Lawrence Summers ’75, explains that “no one had more cumulative influence on the macroeconomic policymakers of the last generation than Stanley Fischer … We all were shaped by his clarity of thought, intellectual balance, personal decency, and quality of character. In a broader sense, everyone who was involved in the macro policy enterprise was Stan Fischer’s disciple. People all over the world who never knew his name lived better, more secure, lives because of all that he did through his teaching, writing, and service.”
Fischer grew up in Northern Rhodesia (now Zambia), living behind the general store his family ran before moving to Southern Rhodesia (now Zimbabwe) at the age of 13. Inspired by the quality of writing in John Maynard Keynes’ “The General Theory of Employment, Interest, and Money,” he applied for and won a scholarship to study at the London School of Economics. He moved to MIT for his graduate studies, where his dissertation was supervised by Franklin M. Fisher. After several years on the University of Chicago faculty, he returned to MIT in 1973, where he stayed for the remainder of his academic career. He held the Elizabeth and James Killian Class of 1926 professorship from 1992 to 1995, serving as department chair in 1993–94, before being called away to the IMF.
Fischer’s intellectual journey from MIT to Chicago and back culminated in his most influential academic work. Ivan Werning, the Robert M. Solow Professor of Economics at MIT, notes, “his research was pathbreaking and paved the way to the modern approach to macroeconomics. By merging nominal rigidities associated with MIT’s Keynesian tradition with rational expectations emanating from the Chicago school, his 1977 paper on ‘Long-Term Contracts, Rational Expectations, and the Optimal Money Supply Rule’ showed how the non-neutrality of money did not require agent irrationality or confusion.” The dynamic stochastic general equilibrium models now used at every central bank to evaluate monetary policy options are direct descendants of Fischer’s thinking.
Fischer’s influence goes beyond what has become known as New Keynesian Economics. Werning continues, “Fischer’s research combined theoretical insights to very applied questions. His textbook with Blanchard was instrumental to an entire generation of macroeconomists, showing macroeconomics as a rich and evolving field, ripe with tools and great questions to study. Along with Bob Solow, Rudi Dornbusch, and others, Fischer had a huge impact within the MIT economics department and helped build its day-to-day culture, with an inquisitive, open-minded, and friendly atmosphere.”
Macroeconomics — and MIT — owe him a profound debt.
Fischer is survived by his three sons, Michael, David, and Jonathan, and nine grandchildren.
Campus & Community
Community connections
Sophia Scott.
Niles Singer/Harvard Staff Photographer
Nikki Rojas
Harvard Staff Writer
June 3, 2025
3 min read
67 grads recognized for Civic Engagement
Every week, Sophia Scott ’25 travels from Cambridge to the Suffolk County Jail in Boston to teach a high school equivalency class to inmates hoping to get their diplomas. It’s a trip she has made since her sophomore year.
The work, which also includes conducting workshops and teaching public health, enabled the human evolutionary biology concentrator to be one of 67 College seniors to graduate on May 29 with a Civic Engagement certificate. Students in the program must complete three classes — including an engaged scholarship course — that provide rigorous perspectives on social issues, a 300-hour practicum, and a capstone retreat.
“This program is very intentionally designed to help students think about their path through Harvard and how to make the most of the College experience,” said Travis Lovett, assistant dean of civic engagement and service. “If you’re civically minded, it helps to reinforce that mission. The academic experience is all rooted in social issues. It has a very strong connection between theory and practice, and that’s incredibly valuable.”
Scott, a Kirkland House resident, said the certificate helped her connect with other peers interested in public service.
“For me, civic engagement means caring deeply about the communities that are not just around you, but the communities that are most in need and thinking about what you can do as an individual and as a collective to improve the experiences of those people,” she said.
Jana Amin.
Stephanie Mitchell/Harvard Staff Photographer
Jana Amin ’25, who was also in the first College cohort to graduate with the certificate, worked in the Emerging Leaders Program at the Harvard Radcliffe Institute as a mentor to high school students. Each year, Amin has been paired with two students to help develop their leadership skills and support a social change project they can implement in their communities.
“The cool part about the program is that it allows me to grow into being a leader, but it also helps me explore what it means to be a mentor,” the Near Eastern languages and civilizations concentrator said. “It’s a very specific kind of role that requires you to listen, to learn, and to be a resource for the students you’re paired with.”
Amin opted to take “Oral Histories with History & Literature” with lecturer Lilly Havstad as one of the three courses required for the certificate. The Mather House resident was able to connect the class, which explored the ethics of conducting oral histories, to her senior thesis on Palestinian women in Cairo.
Gavin Lindsey.
Veasey Conway/Harvard Staff Photographer
Gavin Lindsey ’25 took his final required course in the spring, choosing Elizabeth City’s GenEd offering “U.S. K-12 Schools: Assumptions, Binaries, and Controversies.” The double concentrator in environmental science and public policy and economics plans to use what he learned in the classroom when he begins working with Teach for America.
“The course has been stellar in teaching us what kind of disparities are present in teaching,” noted Lindsey, who ultimately envisions himself going into public service in the housing or economic development sector.
Anthony Miguel.
Anthony Miguel ’25 believes that civic engagement means working directly with his community in Santa María Tataltepec in Oaxaca, Mexico. “Oftentimes public service is overlooked or even undervalued in higher education,” Miguel said. “I feel like this is a next step to highlighting the importance of engaging with your community and bridging that gap between academia and that community.”
The double concentrator in computer science and molecular and cellular biology hopes to return to Mexico to address health concerns such as diabetes, which is the second leading cause of death among Mexican adults.
Science & Tech
‘We’re still standing … We can still do important work’
Sy Boles
Harvard Staff Writer
June 3, 2025
5 min read
Climate researchers wrestling with losses of federal funding, data, and key tools
Recent federal funding cuts are hitting climate research hard. The losses will hamper progress, Harvard environmental scientists say, but they won’t stop it.
The Trump administration has cancelled millions in research grants, shuttered key climate monitoring programs, and made deep cuts to federal agencies that support environmental science. Researchers warn that the cuts could undermine America’s ability to respond to accelerating climate change.
“We’re caught in the crosshairs,” said Peter Huybers, chair of the Department of Earth and Planetary Sciences (EPS). But, he said, “We’re still standing. We still have fantastic people who are here. We have all these things to do, and we can still do important work.”
Environmental research was largely spared from the first round of federal funding cuts, which struck Harvard in early April and led to the University filing suit, arguing the government’s actions violate federal law and the University’s First Amendment rights.
Then a number of EPS researchers learned on May 15 that their grants, many of them through the National Science Foundation, had been targeted under a new round of cuts.
Elsie Sunderland, the Fred Kavli Professor of Environmental Chemistry and Professor of Earth and Planetary Sciences, lost a grant worth $578,882 to study permafrost melt in Arctic ecosystems and its impact on the global mercury budget. Amid the waves of funding halts, she said it was hard to know what would happen next.
“We don’t know what the landscape is and what future funding pipelines look like. A lot of the world right now is just pausing to see where we end up, and how long we have to brace ourselves,” she said.
David Johnston, whose lab studies the relationship between microorganisms and Earth surface evolution, had a grant from the NSF formally terminated, funds withheld from NASA, and one of his students lost an NSF Graduate Fellowship.
“The work of our entire team, including research staff, students, etc., will come to a near standstill. The future of our capacity to do science and make fundamental discoveries is highly uncertain,” said Johnston, who is also a professor of Earth and planetary sciences and co-director of graduate studies.
Mary Rice is director of the Harvard Chan School of Public Health’s Center for Climate, Health, and the Global Environment (C-CHANGE), which studies potential solutions to the health consequences of climate change and fossil fuel pollution.
She lost a major National Institutes of Health-funded Center grant that she co-leads to address heat threats in vulnerable communities around the world, including a project to provide cooling units to older adults in Boston who are at risk of heat stress.
She also lost $500,000 in NIH funding for the final year of a randomized controlled trial testing the impact of clean air filters in the homes of people who have serious lung disease. Rice is determined to find a way to continue the work.
Stopping the trial early would be “a disservice, not only to the people who dedicated their time and their tissue, and went through all of this inconvenience to be part of the study, but it’s also a disservice to the American public and to the taxpayers who paid for the research, to the many people with COPD who could have benefited from the findings of the study,” she said.
But it’s not just the loss of grant funding. Researchers in climate science say they are also concerned about the loss of federal data and federally run analytical tools climate researchers rely on.
“The central missions and much of the important work of these agencies are being dismantled,” said Sunderland, listing the Environmental Protection Agency, the U.S. Geological Survey, and the National Oceanic and Atmospheric Administration. “This affects all of those collaborators, the data infrastructure, all of the work that they do. It’s a mess.”
Rice was alarmed by the Trump administration’s ending of EJScreen, an EPA-based mapping tool that helped researchers monitor parts of the country where vulnerable people were exposed to high levels of pollution.
Graduate student Alex Berry was developing novel methods to monitor food security via satellite imagery in southern Madagascar, where drought has left about 1.6 million people dependent on food aid. Her research involved comparing satellite images of farmland with live reports from the ground. But the on-the-ground data collection was run by Catholic Relief Services, which was funded by the now largely dismantled USAID.
“We’re no longer getting that live data,” Berry said. “The Madagascar project has been completely shut down.” Plans to expand the project to Ethiopia and Madagascar have been scaled back and may not proceed at all.
Global surface temperatures in 2024 were 2.3 degrees Fahrenheit above the 20th-century baseline, topping the previous record set in 2023. It’s an unprecedented heat streak; in some parts of the world, it’s regularly stretching the upper limit of what the human body can tolerate.
EPS head Huybers, who researches how climate change affects agricultural output around the world, said his department is pivoting to preserve critical work. He got as much data on global food insecurity as he could from USAID and is carrying forward some of their work.
“We are putting together a global database of agricultural yields at the subnational level and plan to release a publicly available version soon,” he said.
Despite the many setbacks, EPS researchers say they remain committed to the work.
“I think it is our responsibility right now to document the harm that is being imposed, both on the environment locally and globally, and then on public health,” Sunderland said. “I don’t actually feel depressed about this. I feel like we have an obligation to speak out and talk about it, and then really keep doing our work.”
CNA, 28 May 2025
8world Online, 28 May 2025
Money 89.3FM, 28 May 2025
Warna 94.2FM, 28 May 2025
The Straits Times, 29 May 2025, Singapore, pA18
Lianhe Zaobao, Singapore, 29 May 2025, p8
Tamil Murasu, 30 May 2025, p2
The technology of ETH Zurich spin-off MESH allows for new shapes and was used for the Tor Alva in Mulegns, among others. And it makes construction more efficient, for example in prefabrication for the second Gotthard road tunnel.
Every tumour is unique. This makes it difficult to find the most effective therapy for treatment. Researchers in Zurich and Basel are now showing how state-of-the-art molecular biological technologies can be used to create a detailed tumour profile within four weeks, enabling tailored treatment. The study is the first of its kind in the world.
Hydrogen has the potential to be a climate-friendly fuel since it doesn’t release carbon dioxide when used as an energy source. Currently, however, most methods for producing hydrogen involve fossil fuels, making hydrogen less of a “green” fuel over its entire life cycle.
A new process developed by MIT engineers could significantly shrink the carbon footprint associated with making hydrogen.
Last year, the team reported that they could produce hydrogen gas by combining seawater, recycled soda cans, and caffeine. The question then was whether the benchtop process could be applied at an industrial scale, and at what environmental cost.
Now, the researchers have carried out a “cradle-to-grave” life cycle assessment, taking into account every step in the process at an industrial scale. For instance, the team calculated the carbon emissions associated with acquiring and processing aluminum, reacting it with seawater to produce hydrogen, and transporting the fuel to gas stations, where drivers could tap into hydrogen tanks to power engines or fuel cell cars. They found that, from end to end, the new process could generate a fraction of the carbon emissions associated with conventional hydrogen production.
In a study appearing today in Cell Reports Sustainability, the team reports that for every kilogram of hydrogen produced, the process would generate 1.45 kilograms of carbon dioxide over its entire life cycle. In comparison, fossil-fuel-based processes emit 11 kilograms of carbon dioxide per kilogram of hydrogen generated.
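The scale of that difference can be checked with quick arithmetic. This sketch uses only the two figures quoted above; the percentage it derives is an illustration, not a number reported by the study itself:

```python
# Life-cycle emissions quoted in the study (kg CO2 per kg H2 produced)
aluminum_seawater_process = 1.45   # MIT aluminum-seawater route
fossil_fuel_process = 11.0         # conventional fossil-fuel-based hydrogen

# Relative reduction in carbon footprint
reduction = 1 - aluminum_seawater_process / fossil_fuel_process
print(f"~{reduction:.0%} lower emissions")  # ~87% lower emissions
```

In other words, the aluminum-seawater route would emit roughly an eighth of the carbon dioxide of the conventional process per kilogram of hydrogen.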
The low-carbon footprint is on par with other proposed “green hydrogen” technologies, such as those powered by solar and wind energy.
“We’re in the ballpark of green hydrogen,” says lead author Aly Kombargi PhD ’25, who graduated this spring from MIT with a doctorate in mechanical engineering. “This work highlights aluminum’s potential as a clean energy source and offers a scalable pathway for low-emission hydrogen deployment in transportation and remote energy systems.”
The study’s MIT co-authors are Brooke Bao, Enoch Ellis, and professor of mechanical engineering Douglas Hart.
Gas bubble
Dropping an aluminum can in water won’t normally cause much of a chemical reaction. That’s because when aluminum is exposed to oxygen, it instantly forms a shield-like layer. Without this layer, aluminum exists in its pure form and can readily react when mixed with water. The reaction that occurs involves aluminum atoms that efficiently break up molecules of water, producing aluminum oxide and pure hydrogen. And it doesn’t take much of the metal to bubble up a significant amount of the gas.
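The reaction described above, producing aluminum oxide and hydrogen, can be written as 2 Al + 3 H2O → Al2O3 + 3 H2. A back-of-envelope mass balance (a sketch from standard molar masses, not a figure from the study) shows why a modest amount of metal yields a useful amount of gas:

```python
# Stoichiometry of the reaction described above: 2 Al + 3 H2O -> Al2O3 + 3 H2
# Molar masses in g/mol (standard values)
M_AL = 26.98   # aluminum
M_H2 = 2.016   # hydrogen gas

# 2 mol of Al yields 3 mol of H2, so the aluminum needed per kg of hydrogen is:
al_per_kg_h2 = (2 * M_AL) / (3 * M_H2)  # kg Al per kg H2
print(f"~{al_per_kg_h2:.1f} kg of aluminum per kg of hydrogen")  # ~8.9 kg
```

Since a kilogram of hydrogen occupies thousands of liters at ambient pressure, carrying the fuel as a few kilograms of dense aluminum pellets is far more compact than carrying the gas itself.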
“One of the main benefits of using aluminum is the energy density per unit volume,” Kombargi says. “With a very small amount of aluminum fuel, you can conceivably supply much of the power for a hydrogen-fueled vehicle.”
Last year, he and Hart developed a recipe for aluminum-based hydrogen production. They found they could puncture aluminum’s natural shield by treating it with a small amount of gallium-indium, which is a rare-metal alloy that effectively scrubs aluminum into its pure form. The researchers then mixed pellets of pure aluminum with seawater and observed that the reaction produced pure hydrogen. What’s more, the salt in the water helped to precipitate gallium-indium, which the team could subsequently recover and reuse to generate more hydrogen, in a cost-saving, sustainable cycle.
“We were explaining the science of this process in conferences, and the questions we would get were, ‘How much does this cost?’ and, ‘What’s its carbon footprint?’” Kombargi says. “So we wanted to look at the process in a comprehensive way.”
A sustainable cycle
For their new study, Kombargi and his colleagues carried out a life cycle assessment to estimate the environmental impact of aluminum-based hydrogen production, at every step of the process, from sourcing the aluminum to transporting the hydrogen after production. They set out to calculate the amount of carbon associated with generating 1 kilogram of hydrogen — an amount that they chose as a practical, consumer-level illustration.
“With a hydrogen fuel cell car using 1 kilogram of hydrogen, you can go between 60 to 100 kilometers, depending on the efficiency of the fuel cell,” Kombargi notes.
They performed the analysis using Earthster — an online life cycle assessment tool that draws data from a large repository of products and processes and their associated carbon emissions. The team considered a number of scenarios for producing hydrogen with aluminum, starting either with "primary" aluminum mined from the Earth or with "secondary" aluminum recycled from soda cans and other products, and using various methods to transport the aluminum and hydrogen.
After running life cycle assessments for about a dozen scenarios, the team identified one scenario with the lowest carbon footprint. This scenario centers on recycled aluminum — a source that saves a significant amount of emissions compared with mining aluminum — and seawater — a natural resource that also saves money by recovering gallium-indium. They found that this scenario, from start to finish, would generate about 1.45 kilograms of carbon dioxide for every kilogram of hydrogen produced. The cost of the fuel produced, they calculated, would be about $9 per kilogram, which is comparable to the price of hydrogen that would be generated with other green technologies such as wind and solar energy.
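Rearranging the quoted figures into per-kilometer terms is purely arithmetic on the numbers above (1.45 kg CO2 and $9 per kilogram of hydrogen; 60 to 100 kilometers per kilogram in a fuel cell car), not additional data from the study:

```python
# Per-kilometer cost and emissions implied by the study's headline numbers.
CO2_PER_KG_H2 = 1.45    # kg CO2 per kg H2, best-case scenario
COST_PER_KG_H2 = 9.0    # USD per kg H2
RANGES_KM = (60, 100)   # km per kg H2, depending on fuel cell efficiency

for km in RANGES_KM:
    g_co2_per_km = 1000 * CO2_PER_KG_H2 / km
    usd_per_km = COST_PER_KG_H2 / km
    print(f"{km} km/kg: {g_co2_per_km:.1f} g CO2/km, ${usd_per_km:.3f}/km")
```

That works out to roughly 15 to 24 grams of CO2 and 9 to 15 cents per kilometer, depending on fuel cell efficiency.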
The researchers envision that if the low-carbon process were ramped up to a commercial scale, it would look something like this: The production chain would start with scrap aluminum sourced from a recycling center. The aluminum would be shredded into pellets and treated with gallium-indium. Then, drivers could transport the pretreated pellets as aluminum “fuel,” rather than directly transporting hydrogen, which is potentially volatile. The pellets would be transported to a fuel station that ideally would be situated near a source of seawater, which could then be mixed with the aluminum, on demand, to produce hydrogen. A consumer could then directly pump the gas into a car with either an internal combustion engine or a fuel cell.
The entire process does produce an aluminum-based byproduct, boehmite, which is a mineral that is commonly used in fabricating semiconductors, electronic elements, and a number of industrial products. Kombargi says that if this byproduct were recovered after hydrogen production, it could be sold to manufacturers, further bringing down the cost of the process as a whole.
“There are a lot of things to consider,” Kombargi says. “But the process works, which is the most exciting part. And we show that it can be environmentally sustainable.”
The group is continuing to develop the process. They recently designed a small reactor, about the size of a water bottle, that takes in aluminum pellets and seawater to generate enough hydrogen to power an electric bike for several hours. They previously demonstrated that the process can produce enough hydrogen to fuel a small car. The team is also exploring underwater applications and is designing a hydrogen reactor that would take in surrounding seawater to power a small boat or underwater vehicle.
This research was supported, in part, by the MIT Portugal Program.
MIT engineers have developed a new aluminum-based process to produce hydrogen gas that they are testing on a variety of applications, including an aluminum-powered electric vehicle, pictured here.
Ammon Love, Alex Norbrook, and Carolina Pardo have been named Udall Scholars, an award given to students "committed to careers in the environment, Tribal public policy or Native health care."
Hearing aids, mouth guards, dental implants, and other highly tailored structures are often products of 3D printing. These structures are typically made via vat photopolymerization — a form of 3D printing that uses patterns of light to shape and solidify a resin, one layer at a time.
The process also involves printing structural supports from the same material to hold the product in place as it’s printed. Once a product is fully formed, the supports are removed manually and typically thrown out as unusable waste.
MIT engineers have found a way to bypass this last finishing step, in a way that could significantly speed up the 3D-printing process. They developed a resin that turns into two different kinds of solids, depending on the type of light that shines on it: Ultraviolet light cures the resin into a highly resilient solid, while visible light turns the same resin into a solid that is easily dissolvable in certain solvents.
The team exposed the new resin simultaneously to patterns of UV light to form a sturdy structure, as well as patterns of visible light to form the structure's supports. Instead of having to carefully break away the supports, they simply dipped the printed material into a solution that dissolved the supports away, revealing the sturdy, UV-printed part.
The supports can dissolve in a variety of food-safe solutions, including baby oil. Interestingly, the supports could even dissolve in the main liquid ingredient of the original resin, like a cube of ice in water. This means that the material used to print structural supports could be continuously recycled: Once a printed structure’s supporting material dissolves, that mixture can be blended directly back into fresh resin and used to print the next set of parts — along with their dissolvable supports.
The researchers applied the new method to print complex structures, including functional gear trains and intricate lattices.
“You can now print — in a single print — multipart, functional assemblies with moving or interlocking parts, and you can basically wash away the supports,” says graduate student Nicholas Diaco. “Instead of throwing out this material, you can recycle it on site and generate a lot less waste. That’s the ultimate hope.”
He and his colleagues report the details of the new method in a paper appearing today in Advanced Materials Technologies. The MIT study’s co-authors include Carl Thrasher, Max Hughes, Kevin Zhou, Michael Durso, Saechow Yap, Professor Robert Macfarlane, and Professor A. John Hart, head of MIT’s Department of Mechanical Engineering.
Waste removal
Conventional vat photopolymerization (VP) begins with a 3D computer model of a structure to be printed — for instance, of two interlocking gears. Along with the gears themselves, the model includes small support structures around, under, and between the gears to keep every feature in place as the part is printed. This computer model is then sliced into many digital layers that are sent to a VP printer for printing.
A standard VP printer includes a small vat of liquid resin that sits over a light source. Each slice of the model is translated into a matching pattern of light that is projected onto the liquid resin, which solidifies into the same pattern. Layer by layer, a solid, light-printed version of the model’s gears and supports forms on the build platform. When printing is finished, the platform lifts the completed part above the resin bath. Once excess resin is washed away, a person can go in by hand to remove the intermediary supports, usually by clipping and filing, and the support material is ultimately thrown away.
“For the most part, these supports end up generating a lot of waste,” Diaco says.
Print and dip
Diaco and the team looked for a way to simplify and speed up the removal of printed supports and, ideally, recycle them in the process. They came up with a general concept for a resin that, depending on the type of light that it is exposed to, can take on one of two phases: a resilient phase that would form the desired 3D structure and a secondary phase that would function as a supporting material but also be easily dissolved away.
After working out some chemistry, the team found they could make such a two-phase resin by mixing two commercially available monomers, the chemical building blocks that are found in many types of plastic. When ultraviolet light shines on the mixture, the monomers link together into a tightly interconnected network, forming a tough solid that resists dissolution. When the same mixture is exposed to visible light, the same monomers still cure, but at the molecular scale the resulting monomer strands remain separate from one another. This solid can quickly dissolve when placed in certain solutions.
In benchtop tests with small vials of the new resin, the researchers found the material did transform into both the insoluble and soluble forms in response to ultraviolet and visible light, respectively. But when they moved to a 3D printer with LEDs dimmer than the benchtop setup, the UV-cured material fell apart in solution. The weaker light only partially linked the monomer strands, leaving them too loosely tangled to hold the structure together.
Diaco and his colleagues found that adding a small amount of a third “bridging” monomer could link the two original monomers together under UV light, knitting them into a much sturdier framework. This fix enabled the researchers to simultaneously print resilient 3D structures and dissolvable supports using timed pulses of UV and visible light in one run.
The team applied the new method to print a variety of intricate structures, including interlocking gears, intricate lattices, a ball within a square frame, and, for fun, a small dinosaur encased in an egg-shaped support that dissolved away when dipped in solution.
“With all these structures, you need a lattice of supports inside and out while printing,” Diaco says. “Removing those supports normally requires careful, manual removal. This shows we can print multipart assemblies with a lot of moving parts, and detailed, personalized products like hearing aids and dental implants, in a way that’s fast and sustainable.”
"We'll continue studying the limits of this process, and we want to develop additional resins with this wavelength-selective behavior and the mechanical properties necessary for durable products," says professor of mechanical engineering John Hart. "Along with automated part handling and closed-loop reuse of the dissolved resin, this is an exciting path to resource-efficient and cost-effective polymer 3D printing at scale."
This research was supported, in part, by the Center for Perceptual and Interactive Intelligence (InnoHK) in Hong Kong, the U.S. National Science Foundation, the U.S. Office of Naval Research, and the U.S. Army Research Office.
Researchers have developed a resin that turns into two different kinds of solids, depending on the type of light that shines on it: Ultraviolet light cures the resin into a highly resilient solid, while visible light turns the same resin into a solid that is easily dissolvable in certain solvents.
The intersection of art, science, and technology presents a unique, sometimes challenging, viewpoint for both scientists and artists. It is in this nexus that art historian Lindsay Caplan positions herself: “My work as an art historian focuses on the ways that artists across the 20th century engage with new technologies like computers, video, and television, not merely as new materials for making art as they already understand it, but as conceptual platforms for reorienting and reimagining the foundational assumptions of their practice.”
With this introduction, Caplan, an assistant professor at Brown University, opened the inaugural Resonances Lecture — a new series by STUDIO.nano to explore the generative edge where art, science, and technology meet. Delivered on April 28 to an interdisciplinary crowd at MIT.nano, Caplan’s lecture, titled “Analogical Engines — Collaborations across Art and Technology in the 1960s,” traced how artists across Europe and the Americas in the 1960s engaged with and responded to the emerging technological advances of computer science, cybernetics, and early AI. “By the time we reached the 1960s,” she said, “analogies between humans and machines, drawn from computer science and fields like information theory and cybernetics, abound among art historians and artists alike.”
Caplan’s talk centered on two artistic networks, with a particular emphasis on American artist Liliane Lijn: New Tendencies exhibitions (1961-79) and the Signals gallery in London (1964-66). She deftly analyzed the artist’s material experimentation with contemporary advances in emergent technologies — quantum physics and mathematical formalism, particularly Heisenberg's uncertainty principle. She argued that both art historical formalism and mathematical formalism share struggles with representation, indeterminacy, and the tension between constructed and essential truths.
Following her talk, Caplan was joined by MIT faculty Mark Jarzombek, professor of the history and theory of architecture, and Gediminas Urbonas, associate professor of art, culture, and technology (ACT), for a panel discussion moderated by Ardalan SadeghiKivi MArch ’23, lecturer of comparative media studies. The conversation expanded on Caplan’s themes with discussions of artists’ attraction to newly developed materials and technology, and the critical dimension of reimagining and repurposing technologies that were originally designed with an entirely different purpose.
Urbonas echoed the urgency of these conversations. “It is exceptionally exciting to witness artists working in dialectical tension with scientists — a tradition that traces back to the founding of the Center for Advanced Visual Studies at MIT and continues at ACT today,” reflected Urbonas. “The dual ontology of science and art enables us to grasp the world as a web of becoming, where new materials, social imaginaries, and aesthetic values are co-constituted through interdisciplinary inquiry. Such collaborations are urgent today, offering tools to reimagine agency, subjectivity, and the role of culture in shaping the future.”
The event concluded with a reception in MIT.nano’s East Lobby, where attendees could view MIT ACT student projects currently on exhibition in MIT.nano’s gallery spaces. The reception was, itself, an intersection of art and technology. “The first lecture of the Resonances Lecture Series lived up to the title,” reflects Jarzombek. “A brilliant talk by Lindsay Caplan proved that the historical and aesthetical dimensions in the sciences have just as much relevance to a critical posture as the technical.”
The Resonances lecture and panel series seeks to gather artists, designers, scientists, engineers, and historians who examine how scientific endeavors shape artistic production, and vice versa. Their insights expose the historical context on how art and science are made and distributed in society and offer hints at the possible futures of such productions.
“When we were considering who to invite to launch this lecture series, Lindsay Caplan immediately came to mind,” says Tobias Putrih, ACT lecturer and academic advisor for STUDIO.nano. “She is one of the most exciting thinkers and historians writing about the intersection between art, technology, and science today. We hope her insights and ideas will encourage further collaborative projects.”
The Resonances series is one of several new activities organized by STUDIO.nano, a program within MIT.nano, to connect the arts with cutting-edge research environments. "MIT.nano generates extraordinary scientific work," says Samantha Farrell, manager of STUDIO.nano, "but it's just as vital to create space for cultural reflection. STUDIO.nano invites artists to engage directly with new technologies — and with the questions they raise."
In addition to the Resonances lectures, STUDIO.nano organizes exhibitions in the public spaces at MIT.nano, and an Encounters series, launched last fall, to bring artists to MIT.nano. To learn about current installations and ongoing collaborations, visit the STUDIO.nano web page.
STUDIO.nano, an initiative by MIT.nano, hosted its first Resonances Lecture. The talk, “Analogical Engines — Collaborations across Art and Technology in the 1960s,” was delivered by Brown University Assistant Professor Lindsay Caplan (second from left). Afterward, she participated in a panel discussion with Gediminas Urbonas, associate professor of art, culture, and technology at MIT (second from right), and Mark Jarzombek, professor of the history and theory of architecture at MIT (right). The panel was moderated by Ardalan SadeghiKivi, lecturer of comparative media studies at MIT (left).
For weeks, the whiteboard in the lab was crowded with scribbles, diagrams, and chemical formulas. A research team across the Olivetti Group and the MIT Concrete Sustainability Hub (CSHub) was working intensely on a key problem: How can we reduce the amount of cement in concrete to save on costs and emissions?
The question was certainly not new; materials like fly ash, a byproduct of coal combustion, and slag, a byproduct of steelmaking, have long been used to replace some of the cement in concrete mixes. However, demand for these products is outpacing supply as industry expands their use to reduce its climate impacts, making the search for alternatives urgent. The challenge the team discovered wasn't a lack of candidates; the problem was that there were too many to sort through.
On May 17, the team, led by postdoc Soroush Mahjoubi, published an open-access paper in Nature’s Communications Materials outlining their solution. “We realized that AI was the key to moving forward,” notes Mahjoubi. “There is so much data out there on potential materials — hundreds of thousands of pages of scientific literature. Sorting through them would have taken many lifetimes of work, by which time more materials would have been discovered!”
With large language models, like the chatbots many of us use daily, the team built a machine-learning framework that evaluates and sorts candidate materials based on their physical and chemical properties.
“First, there is hydraulic reactivity. The reason that concrete is strong is that cement — the ‘glue’ that holds it together — hardens when exposed to water. So, if we replace this glue, we need to make sure the substitute reacts similarly,” explains Mahjoubi. “Second, there is pozzolanicity. This is when a material reacts with calcium hydroxide, a byproduct created when cement meets water, to make the concrete harder and stronger over time. We need to balance the hydraulic and pozzolanic materials in the mix so the concrete performs at its best.”
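The balancing idea Mahjoubi describes can be sketched as a toy classifier. Everything here — the material names, scores, and threshold — is illustrative only, not taken from the published framework, which instead uses large language models to extract these properties from the literature:

```python
# Toy sketch: bucket candidate materials by hydraulic and pozzolanic activity.
# Scores and threshold are hypothetical placeholders on a 0-1 scale.
candidates = {
    "ground ceramic tile": (0.2, 0.8),
    "steel slag":          (0.7, 0.4),
    "granite dust":        (0.1, 0.1),
}

def classify(hydraulic: float, pozzolanic: float, threshold: float = 0.3) -> str:
    """Label a material by which reactivity mechanisms exceed the threshold."""
    if hydraulic >= threshold and pozzolanic >= threshold:
        return "both"
    if hydraulic >= threshold:
        return "hydraulic"
    if pozzolanic >= threshold:
        return "pozzolanic"
    return "inert"

labels = {name: classify(h, p) for name, (h, p) in candidates.items()}
print(labels)
```

A mix designer could then combine materials from the "hydraulic" and "pozzolanic" buckets so the concrete, as Mahjoubi puts it, performs at its best.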
Analyzing scientific literature and over 1 million rock samples, the team used the framework to sort candidate materials into 19 types, ranging from biomass to mining byproducts to demolished construction materials. Mahjoubi and his team found that suitable materials were available globally — and, more impressively, many could be incorporated into concrete mixes just by grinding them. This means it’s possible to extract emissions and cost savings without much additional processing.
“Some of the most interesting materials that could replace a portion of cement are ceramics,” notes Mahjoubi. “Old tiles, bricks, pottery — all these materials may have high reactivity. That’s something we’ve observed in ancient Roman concrete, where ceramics were added to help waterproof structures. I’ve had many interesting conversations on this with Professor Admir Masic, who leads a lot of the ancient concrete studies here at MIT.”
The potential of everyday materials like ceramics and industrial materials like mine tailings is an example of how materials like concrete can help enable a circular economy. By identifying and repurposing materials that would otherwise end up in landfills, researchers and industry can help to give these materials a second life as part of our buildings and infrastructure.
Looking ahead, the research team is planning to upgrade the framework to be capable of assessing even more materials, while experimentally validating some of the best candidates. “AI tools have gotten this research far in a short time, and we are excited to see how the latest developments in large language models enable the next steps,” says Professor Elsa Olivetti, senior author on the work and member of the MIT Department of Materials Science and Engineering. She serves as an MIT Climate Project mission director, a CSHub principal investigator, and the leader of the Olivetti Group.
"Concrete is the backbone of the built environment," says Randolph Kirchain, co-author and CSHub director. "By applying data science and AI tools to material design, we hope to support industry efforts to build more sustainably, without compromising on strength, safety, or durability."
In addition to Mahjoubi, Olivetti, and Kirchain, co-authors on the work include MIT postdoc Vineeth Venugopal; Ipek Bensu Manav SM '21, PhD '24; and CSHub Deputy Director Hessam AzariJafari.
This research was conducted through the MIT Concrete Sustainability Hub, which is supported by the Concrete Advancement Foundation. This work also received funding from the MIT-IBM Watson AI Lab.
A team led by Soroush Mahjoubi, a postdoc in civil and environmental engineering, built a machine-learning framework that evaluates and sorts candidate materials for cleaner concrete based on their physical and chemical properties. “Some of the most interesting materials that could replace a portion of cement are ceramics,” notes Mahjoubi. “Old tiles, bricks, pottery — all these materials may have high reactivity.”
Illustrations by Liz Zonarich/Harvard Staff
Arts & Culture
What good is writing anyway?
Scholars across range of disciplines weigh in on value of the activity amid rise of generative AI systems
Liz Mineo
Harvard Staff Writer
June 2, 2025
9 min read
What do students stand to lose if they no longer have to write?
Since the arrival of ChatGPT in 2022, many students have turned to AI for help writing papers, and use is expected to grow as students become more adept with it. The shift raised immediate concerns among educators about academic integrity and broader ones about what it could mean to intellectual and cognitive development.
The Gazette spoke with faculty across a range of disciplines, including a computer scientist, a philosopher, a neurologist, and two cognitive scientists, among others, to ask them about what may lie ahead. We asked the same question to GPT-4, which describes itself as “OpenAI’s most advanced system that produces safer and more useful responses.” The interviews have been edited for length and clarity.
Neurologist Alice Flaherty, M.D., Ph.D., Associate Professor of Neurology and Associate Professor of Psychiatry at Harvard Medical School
We lose abilities whenever we farm out tasks to other people or machines. Because our brains have limited real estate, they actively lose facts and skills we no longer use, to make room for new facts and skills.
The London cabbie studies are a classic example of competition for resources. As the drivers gained knowledge of the London road map, their posterior hippocampus physically swelled. But their anterior hippocampus shrank — and they were worse at visual recognition tasks than people without elaborate mental maps were.
The question is: When AI frees up all the neurons that currently are busy at finding the right adjective or trope, what new skills will AI make possible? We can’t predict that. If AI becomes able to do everything we can, we can’t even predict whether there will be any new skills left for us to acquire.
Computer scientist Leslie Valiant, T. Jefferson Coolidge Professor of Computer Science and Applied Mathematics
Education will be rethought in the coming years. AI will be the impetus for this happening, but not the fundamental reason for its necessity. The fundamental reason, in my view, is that up till now we have not recognized human educability as being a phenomenon fundamentally worth studying and understanding. Instead, education has been practiced as a best-practice endeavor with few intellectual underpinnings.
Where does AI come in? With AI one can emulate any human cognitive phenomenon that one can define and that one puts enough effort toward. This view was presciently articulated by Alan Turing already in 1951. Recent evidence suggests that it will not be productive to dispute it.
In consequence, education should not be conceived in terms of competition between humans and machines. The primary question for educators is: What can and should education achieve for humans? Essays may or may not be part of the answer. Further, the challenges of evaluating student work when AI is available should not obscure the question.
Philosopher Susanna Siegel, Edgard Pierce Professor of Philosophy
If you think of a paper as an answer to a question, you might think the sole point of writing the paper is to answer the question. If a machine can answer the question just as well or better than a student can, isn’t it more efficient to use the machine’s answer?
This model of the situation overlooks the benefits to the student of the process of inquiry. As a finished product, a paper rehearses a line of thought, sometimes a route to an answer to a question.
But the product leaves invisible the tangled lines of inquiry, the false but interesting starts, the productive, beautiful chaos of thinking and writing made of things that at one point seemed relevant but turned out not to be. Those things are a mix of sand and gold. Amidst this chaos is epistemic progress.
Finally, when we outsource the task of finding words to express things, we lose all the kinds of mental activation that come from those searches. Words and ideas are laden with associations, reminders, emotional charges — a whole forest of connections to our past and future tasks, ideas, and relationships.
There is also a communicative value to the process of writing from scratch; when you guide someone on a line of thought with all its tangles and dead ends and reorientations, you build their understanding.
Cognitive scientist Tomer D. Ullman, assistant professor of psychology
Suppose a wormhole opened up tomorrow and out stepped an alien intelligence nicknamed Bunfo. Bunfo seems dumb at first, but the more we talk to it, the smarter it gets. And all it seems to want is to answer whatever you ask. You can ask it, “What is ‘I love you’ in Italian?” and it’ll say, “Ti amo.” You can ask how to make a pizza, and it’ll give you a recipe.
Before long, industries form to hook up anything from phones to planes to Bunfo, as academics hotly debate how Bunfo thinks and what it wants. And before long, students are asking Bunfo to write their essays for them. What do the students lose? They lose whatever it is writing essays is meant to teach.
So, sure, if you want to order a pizza in Rome, it makes total sense to show a menu to Bunfo and ask, “What is this in English?” But if you’re signed up for a class because you yourself want to learn Italian, and all you do is have Bunfo answer things for you, then what are you even doing there?
Visual cognitive computational neuroscientist Talia Konkle, professor of psychology
Some of the relevant cognitive science to draw on is related to active versus passive learning. Active learning — generating content yourself — helps retention. For example, the well-known “testing effect” is a powerful result in the science of learning; the act of testing, of trying to generate answers, helps retention.
A simple example is learning vocabulary. If you read the word and definition together, you won’t learn vocabulary as well as if you see the word alone, have a pause, try to guess, and then flip the flashcard over to see the definition afterwards.
If we use AI to help us with active learning, to me there are huge benefits. But if we’re using it to shortcut our thinking on the skills we’re trying to internalize, then that is likely counterproductive.
I know in advance when I'm doing the kind of work where I'm gaining new knowledge, where it is important to me that these gains get internalized into my own brain. This awareness influences the AI strategies I use to approach this learning. And I'm aware of other times when the work I need to produce is less related to my learning goals, where I'm happy to use AI's synthesizing and summarizing ability for work efficiency.
This is where I think it’s ultimately about the student’s learning goals when writing an essay. We as professors can communicate what we hope they get out of the writing process, and what we think would count as success. But, as with any assignment, the student can engage more or less actively with that process. AI may make it easier or more tempting to be passive learners, but I do think AI can equally make us more efficient active learners as well.
Experimental psychologist, neuroscientist and philosopher Joshua Greene, professor of psychology
When students let AI do the thinking for them, they miss out on learning how to think better. What these students lose today is what past students lost by enlisting a friend, parent, or hired gun to do the work. What’s different today, however, is that the substitute intelligence is expected to be freely available for the indefinite future.
I believe that it’s still important for people to learn how to think well, but as the machines get increasingly capable, many will ask: Why bother? My answer: Even if the economic demand for human cognitive labor is dramatically diminished, we still want wise humans in charge.
This requires that at least some humans — and ideally all humans — maintain the intellectual skills needed to envision and realize a good future. In the decades to come, the need for clear thinking may be less technocratic and more democratic.
Historian Mary Lewis, Robert Walton Goelet Professor of French History
The objective of writing is not just the product but also the process. Students who exclusively use AI to write their essays lose the process of thinking. And that is, after all, what we’re trying to do: produce new knowledge. If the goal is thinking or producing new knowledge, then a human being is the only one who can do that.
By its very nature, AI is rehashing things that are already out there. How many of us have had the experience where we’re texting someone, and AI tries to predict what we’re going to say next, and gets it wrong? Why is it doing that? Because it is drawing from what it thinks is the most likely thing that we’re going to say. So, in a paper, if AI is using predictive text, it’s simply going to regurgitate ideas that are out there instead of creating new ones.
The more information that is fed into AI, the more AI has to work with. As this happens, it may look like it’s reflecting, but in fact, it’s operating in a kind of a huge, closed circuit. So, I return to my first point: Writing leads to thinking; it’s not an end product, it’s a process that stimulates thought.
GPT-4, “OpenAI’s most advanced system that produces safer and more useful responses”
When students allow AI to write their essays, they miss out on the essential learning that comes from engaging with the writing process. Writing helps develop critical thinking, encourages deeper understanding of the subject matter, and strengthens communication skills.
By bypassing this process, students lose the opportunity to clarify their own ideas, learn how to structure arguments, and refine their unique voice. They also risk not truly grasping the content, since meaningful learning often happens during the struggle to organize thoughts and express them clearly.
Additionally, using AI to write work intended to be original can raise ethical concerns and prevent students from receiving useful feedback that could help them grow. In the long run, relying entirely on AI can undermine both academic development and personal confidence.
Science & Tech
Why are you cursing?
Photo illustration by Liz Zonarich/Harvard Staff
June 2, 2025
4 min read
Steven Pinker breaks down history of taboo words, different categories of swearing, and the meaning conveyed by a bleep
Part of the Wondering series
A series of random questions answered by Harvard experts.
Steven Pinker, the Johnstone Family Professor of Psychology, wrote a chapter on swearing for his book “The Stuff of Thought: Language as a Window into Human Nature” (2007).
The content of swear words varies from culture to culture, from language to language, and across time periods, as we see in the fact that damn and hell used to be highly fraught taboo words in English. Part of that has to do with the decline of religious sensibilities. In an era in which people literally thought that God was monitoring every word and were worried about spending eternity in hell, a curse that damned you to hell would have had more of an impact than it does today. Likewise with sexuality. Before the sexual revolution, words like f*ck and pr*ck were far more offensive than they are today, though they still retain the taboo status. Cultural norms and taboos give rise historically to swear words.
The common denominator of swear words is that they evoke a negative emotion, whether it be dread of the supernatural, as in religious taboo words; disgust at bodily secretions; revulsion at depraved sexual acts; or a hatred of certain disfavored or marginalized people.
The act of swearing indicates that someone is prepared to inflict discomfort, but it can also be used to express informality: a setting where we don’t have to watch what we say. A sweeping cultural change that began around the turn of the 20th century eroded formality, inhibition, and decorum in all spheres of life in favor of authenticity, self-expression, and spontaneity. This informalization extended to a relaxing of the inhibition against swearing.
Among the types of swearing, there is dysphemistic swearing. Dysphemism is the opposite of euphemism, a word chosen to avoid arousing emotion in your listener. Dysphemism is a word selected precisely because it arouses emotion, as when you say, “Will you please pick up your dog sh*t?” because you want to convey your anger and revulsion. Abusive swearing is when you use negative emotion to humiliate or intimidate someone by likening them to an unpleasant or at least emotionally fraught object, as when you call someone a d*ck or accuse them of engaging in undignified sexual activities. There is cathartic swearing, as when you hit your head on a cupboard door, and there is idiomatic swearing, as in, “get your sh*t together,” “pissed off,” “a pain in the ass.”
There are also truncated profanities, where sometimes people will stifle the urge to utter a taboo word by using a word that rhymes with or alliterates with the taboo word. Like geez, gee-whiz for Jesus Christ, or gosh, golly for God. One other feature of taboo words is the use of hyphens and asterisks. The use of asterisks raises the question: Who’s being fooled by writing f*ck, or when something is bleeped out on TV? It’s not so much the concept behind the word, but it is the very act of using the word that both speaker and hearer understand to be an intentional attempt at shocking or transgressing. What the truncated profanity does is withdraw that intention but keep all else constant. It signals “I am conspicuously and deliberately not trying to offend you,” something that ordinarily is the whole point of swearing.
I think swearing ought to be used judiciously. The potent use of taboo language depends on it not being used in every sentence; it’s something to hold in reserve for when you want to shock an audience or call attention to the dangerous or evocative aspect of something. You might want to keep the powder dry, as they say. I find it rather tedious when people gratuitously use f*ck as if it were the only means at their disposal to emphasize a point. With the English language having some half a million words, it’s just more pleasing if people dip into the lexicon and find the best word, instead of just falling back on the easy taboo word.
Science & Tech
Science that gives humans more say over their destinies
Baby KJ with doctors after being treated for a rare genetic disorder using CRISPR technology.
Photos by Children’s Hospital of Philadelphia
Yahya Chaudhry
Harvard Correspondent
June 2, 2025
5 min read
David Liu’s gene-editing technologies demonstrate game-changing potential in two recent cases
David Liu has been having a good run.
On May 15, the New England Journal of Medicine published what has become a high-profile case of a 5-month-old boy called KJ with a deadly genetic disorder, who became the first to receive a personalized CRISPR gene-editing treatment. The therapy was built on base editing, a technology developed in Liu’s lab nearly a decade ago.
The therapy was created to correct the single-letter genetic mutation that was shutting down KJ’s ability to eliminate ammonia from his liver, a condition known as CPS1 deficiency. Half of the babies with this disorder die in the first week of life. KJ not only survived but began recovering.
This was the first of two recent landmark events involving science developed by Liu, the Thomas Dudley Cabot Professor of the Natural Sciences, and his team. Liu, who is also an HHMI investigator, won the 2025 Breakthrough Prize for his work on base editing and prime editing, technologies that enable the correction or replacement of virtually any genetic mutation.
“There’s a lot of confidence in base editing technology based on the 17 previous base editing clinical trials and thousands of research publications, but it doesn’t change the fact that you still realize the stakes are very high for this patient and their family,” said Liu of KJ’s case. “This story is a powerful testament to the fact that the editing technology, the delivery technology, the manufacturing, the animal models, and the regulatory approval are all robust enough to make this huge team effort happen fast enough to save Baby KJ.”
A chemical biologist by training, Liu has long believed that scientific discovery gives humans the opportunity to have more say over their own lives.
“It’s easy to forget that every translation of science into a societal benefit began as a basic science project.”
David Liu, Thomas Dudley Cabot Professor of the Natural Sciences
Just a week after KJ’s story made headlines, Prime Medicine, a biotech company Liu co-founded, announced the first clinical results of its therapy for chronic granulomatous disease (CGD), a rare and severe immune disorder.
For the first time, an adult human had been treated with a prime-edited medicine. Prime editing enables even more versatile and precise rewriting of DNA.
The patient was missing two base pairs in a particular gene that encodes an enzyme essential for immune system function. The adult patient responded positively: Early data showed restoration of the missing enzyme activity and no serious adverse effects.
“This is the Alyssa Tapley moment for prime editing,” Liu said, referring to the first patient successfully treated with base editing in the U.K. “The scientists at Prime Medicine succeeded in taking out the patient’s bone marrow, editing it with prime editing, and then transplanting the patient’s edited bone marrow back into his body.
“Until now, the only curative treatment for CGD has been transplantation of another person’s bone marrow into the patient, which puts patients at significant risk of rejection or of graft-versus-host disease, in which the donor’s immune system attacks the patient’s own tissue,” he added. “So prime editing provides an especially elegant and effective solution, by prime editing the patient’s own bone marrow to fix the disease-causing deletion and then returning it to the patient.”
In KJ’s case, the treatment was developed on an emergency timeline by a coalition of academic researchers and companies across the U.S., led by University of Pennsylvania physician scientists Kiran Musunuru and Rebecca Ahrens-Nicklas.
University of Pennsylvania physician scientists Kiran Musunuru and Rebecca Ahrens-Nicklas hold KJ after his treatment.
Liu’s lab played a vital role in developing base editing almost exactly eight years ago and advising the team on which base editor would be likeliest to correct KJ’s mutation effectively while minimizing the risk of unwanted side effects.
“It normally takes many years to go from a genetic diagnosis of a new mutation to a clinical treatment,” Liu said. “This time, it happened in less than seven months.”
While base editing and prime editing technologies each offer their own strengths, their shared promise is a future in which previously untreatable genetic diseases can be reversed with a bespoke treatment.
Prime editing, introduced by Liu’s group in 2019, acts like a molecular word processor, capable of search-and-replace corrections to DNA. It opens the door to treating thousands of genetic mutations behind conditions like sickle cell disease, progeria, and Tay-Sachs.
Both KJ’s case and the CGD breakthrough illustrate what Liu has long championed: the translation of fundamental science into tools that can change lives — safely, swiftly, and equitably.
“Genetic diseases are a consequence of the chemical structure of our DNA, so they will always be a part of humanity. Our mission is to make possible a future in which these types of gene editing treatments are routine, so that people are no longer so beholden to the misspellings in our DNA,” Liu said. “We can finally have some say in our genetic features.”
Liu credited early grants from the National Institutes of Health and other public agencies for enabling high-risk, high-reward ideas like gene editing. Basic science and future breakthroughs are in grave danger due to Washington’s cuts to scientific research, Liu warned.
“It’s easy to forget that every translation of science into a societal benefit began as a basic science project where initially there may not have been any obvious pathway to benefit society,” Liu said. “A basic science investigation into repetitive sequences of DNA found in bacteria turned into the discovery of CRISPR and now into dozens of uses of gene editing to benefit patients with terrible diseases, demonstrating the critical importance of basic science to humanity. Basic science must be supported if we want our children to have the opportunity to live better lives.”