AI, Science, and Society: Insights from Global Experts 

“AI, Science, and Society” conference (Credits: Bachelor of Science - École Polytechnique's Facebook Post)

The “AI, Science, and Society” conference took place on February 6 and 7, 2025, at the Campus of Institut Polytechnique de Paris at École Polytechnique in Paris, France. This two-day event featured globally recognized AI experts and fostered critical dialogue at the intersection of technological innovation, interdisciplinary scientific research, and societal transformation. These discussions served as a precursor to the “AI Action Summit,” sponsored by the French government, which aimed to strengthen France’s position in leading AI development and to chart the course of local and regional AI ecosystems and solutions.

This blog focuses on the program for February 6, 2025, which included a keynote address by Michael Jordan; a plenary session featuring Eric Xing, Emmanuel Candès, Asuman Özdağlar, and Joëlle Barral that explored the theme “AI at an Inflection Point”; a symposium covering topics from the mathematics of machine learning to the road to trustworthy AI; and a final plenary featuring Bernhard Schölkopf and Yann LeCun, followed by a fireside chat with Michael Jordan, Yann LeCun, Bernhard Schölkopf, and Stéphane Mallat.

I followed this event online and watched the talks on YouTube out of curiosity about how global experts view the possibility of AI replacing humans or entire professions, such as teachers and doctors. The observations and reflections I share here stem from this interest and my desire to understand AI from a global and European perspective while living and working within a North American context.

Prediction Without Understanding, Context, or Agency

Professor Eric Xing noted that current large language models (LLMs) primarily focus on next-word prediction rather than genuine understanding. He argued that these models are not equipped to handle tasks requiring physical interaction or skills difficult to articulate in language, such as swimming or navigating traffic. To advance AI beyond these limitations, he introduced the concept of a “world model” and “agents” as a potential future direction for AI—one that could enable broader reasoning and action. Xing also emphasized that current AI architectures fail to address fundamental concepts such as agency, free will, and morality. 

The notion of a super Artificial General Intelligence (AGI) that knows everything and dictates human actions was challenged by Professor Michael Jordan. He argued that such an AI would lack awareness of an individual’s specific context, needs, or desires. This suggests that professions requiring personalized guidance and human understanding—such as teaching and medicine—will continue to rely on human expertise.

Jordan also pointed out that much of the current hype around AI stems from the resurgence of the term “AI” following the development of LLMs, which are essentially scaled-up versions of previous machine learning architectures centered on prediction. While these models are powerful, he stressed that they do not yet represent the human-like intelligence originally envisioned in AI’s early days.

Human Augmentation, Not Replacement

The challenge of establishing a “ground truth” for AI evaluation in fields like medicine was a key discussion point. Experts highlighted the importance of human judgment and the complexities involved in decision-making. Speakers also highlighted the emotional and social dimensions of AI adoption, emphasizing that public reactions to AI failures tend to be harsher, and AI errors more heavily scrutinized, than comparable human errors. This tendency reflects deep-seated concerns about trust and accountability in AI-driven decisions, particularly in education and other critical fields. As a result, maintaining human oversight remains essential to ensure ethical implementation and build confidence in AI-powered tools.

There was a general consensus that AI is more likely to serve as a tool for augmenting human capabilities rather than replacing humans entirely. Joëlle Barral reinforced this perspective, stating that Google DeepMind’s mission is to develop AI responsibly “to benefit humanity.” Examples such as AlphaFold aiding biological research and AI assisting in medical diagnosis illustrate how AI can enhance human expertise and accelerate discovery rather than eliminate the need for scientists or doctors.

Xing envisioned AI as a means to “reprogram the university” by preparing trainees for jobs in an evolving landscape, suggesting that AI will transform roles rather than make them obsolete. Jordan proposed an “economic model” in which humans and machines interact in a market-like environment, promoting collaboration rather than replacement. He advocated for AI development that focuses on connecting people and augmenting their abilities rather than substituting for them.

The Future Requires Human Skills

Laura Chaubard, Director-General of École Polytechnique, emphasized that navigating AI’s “deep transformation” requires “the guiding lights of science and education.” She stressed that institutions like École Polytechnique will continue to play a crucial role in driving not only technological innovation but also societal progress, reinforcing the enduring importance of education and human intellect.

Jordan further emphasized the necessity of interdisciplinary collaboration—bringing together expertise from computer science, statistics, and economics—to develop AI systems that are robust, beneficial, and well-integrated into society. This highlights the indispensable role of human knowledge across diverse fields in shaping AI’s future.

Conclusion: AI as a Transformative Partner, Not a Replacement

While AI is advancing rapidly and reshaping various industries, its current limitations—such as a lack of true understanding, agency, and contextual awareness—highlight the continued need for human expertise, ethical considerations, and oversight. These factors, along with concerns over data integrity and trust, indicate that AI will likely serve as a tool for augmenting human capabilities rather than entirely replacing roles like teachers and doctors. The emphasis on responsible AI development and human-AI collaboration reinforces the idea that AI should be seen not as a substitute for human intelligence but as a transformative partner in shaping the future of work and society.

Please cite the content of this blog:

Correia, A.-P. (2025, March 24). AI, Science, and Society: Insights from Global Experts. Ana-Paula Correia’s Blog. https://www.ana-paulacorreia.com/blog/ai-science-and-society-insights-from-global-experts
