Keynote Talks

Niloy Mitra

Professor, Dept. of Computer Science
University College London

Towards Iterative and Predictable Workflows for Creative Authoring

Abstract: Many professionals increasingly rely on LLMs and MLLMs, yet current tools behave like black boxes: impressive, hard to steer, and even harder to trust in production. In this talk, I will outline our efforts towards iterative and predictable AI-assisted authoring, where models become reliable “helping hands” rather than opaque oracles. I’ll share design patterns and training strategies that make multimodal systems sensitive to intent, controllable over time, and auditable in outcomes.

Specifically, I’ll present (i) MonetGPT, an image-editing agent that turns natural language into structured edit plans, executes them stepwise, and exposes interpretable checkpoints for reversible iteration; (ii) LAMP, a method for directorial camera control in video and 3D that trains MLLMs to map high-level cinematic cues (“dolly in on the protagonist, keep horizon level”) to parameterized rigs with continuity guarantees; and (iii) CAD model editing via mixed symbolic–geometric reasoning, enabling constraint-aware revisions from brief natural prompts. Time permitting, I will also discuss our latest efforts towards shape tokenization. Join us on our journey to merge generative and predictable workflows.

More details at: https://geometry.cs.ucl.ac.uk/

Bio: Niloy J. Mitra leads the Smart Geometry Processing group in the Department of Computer Science at University College London and the Adobe Research London Lab. He received his Ph.D. from Stanford University under the guidance of Leonidas Guibas. His research develops machine learning frameworks for generating high-quality geometric and functional content in computer graphics applications. He has received several recognitions, including the ACM SIGGRAPH Significant New Researcher Award (2013), the BCS Roger Needham Award (2015), and the Eurographics Outstanding Technical Contributions Award (2019). He was elected a Eurographics Fellow in 2021, served as Technical Papers Chair for SIGGRAPH in 2022, and was inducted into the SIGGRAPH Academy in 2023. Beyond research, Niloy is an avid DIYer and enjoys reading, cricket, and cooking.

Payel Das

Manager and Principal, Research Staff Member
IBM Research

Foundation Models for Advancing Biology

Abstract: Foundation models are rapidly reshaping the landscape of computational biology, offering unified frameworks that learn from vast and heterogeneous biological data. In this talk, I will discuss recent advances that push these models forward in protein modeling using sequence, structure, and dynamics data. First, I introduce Representation Reprogramming, a strategy for sequence-based protein representation learning that enables strong generalization in low-data regimes by adapting pretrained models to new biological tasks with minimal supervision.

I will then explore how we can move beyond sequence alone by incorporating structural information through geometric encoder training and inverse folding with structural consistency, both of which enhance a model’s ability to capture the three-dimensional constraints underlying protein function. Finally, I will highlight emerging approaches for modeling protein dynamics using physics-informed techniques, focusing on energy-based alignment as a means to encode conformational landscapes and guide predictions toward physically meaningful states. Together, these developments illustrate how next-generation protein foundation models can unify sequence, structure, and dynamics to accelerate discovery across the biological sciences.

Bio: Dr. Payel Das is a Principal Research Staff Member and Manager at IBM Research AI in New York, with deep expertise across machine learning, physics, biology, chemistry, and neuroscience. A visionary scientist and technology leader, she is dedicated to advancing innovation responsibly and developing scalable AI systems that serve science and society. Her work has driven key advances in trustworthy AI, next-generation generative AI (GenAI), and AI systems for scientific discovery and social good.

At IBM Research, Dr. Das has held several leadership positions, including Research Scientist, Technical Theme Lead, and Chief Scientist. She also serves as Adjunct Associate Professor in the Department of Applied Physics and Applied Mathematics at Columbia University. Previously, she was a Visiting Fellow at UCLA’s Institute of Pure and Applied Mathematics and a Visiting Student at Princeton University. Dr. Das earned her Master’s degree from the Indian Institute of Technology, Chennai, where she conducted research in Density Functional Theory, and her Ph.D. from Rice University in 2007, focusing on modeling protein landscapes through statistical physics and machine learning. As a Postdoctoral Fellow at IBM Research, she applied high-performance computing to free energy perturbation simulations for studying disease mechanisms. Her research aims to accelerate discovery in the natural sciences by developing state-of-the-art chemical and biomolecular foundation AI models that enable faster identification of disease markers, therapies, and materials. She has also designed brain-inspired language models with adaptive memory for improved reasoning and learning, and architected open-source, cloud-based GenAI systems that promote responsible AI democratization. Her work includes developing methodologies for robust length generalization, fine-tuning, and alignment of transformer-based LLMs. Dr. Das is also interested in metascience and has proposed "Coopetition" as a mechanism for constructing community-based discovery frameworks.

Dr. Das has authored over 60 peer-reviewed publications and filed 60 patent disclosures, and was named an IBM Master Inventor in 2021. She frequently delivers keynote lectures and serves on editorial boards, program committees, and advisory panels. Collaborating with agencies such as DOE, NIST, and NSF, she has contributed to shaping ethical and safe AI for science. Recognized by the Harvard Belfer Center and IEEE, she is also a strong advocate for diversity in STEM, mentoring early-career professionals and students, especially women, in technology and science.

B. Ravindran

Head, Department of Data Science and AI
IIT Madras

Benchmarking Responsibility: Why AI Safety Requires Indigenous Evaluation Frameworks

Abstract: AI systems often fail catastrophically in real-world deployments, yet rigorous evaluation frameworks remain sparse—particularly for contexts outside the Global North. This talk examines why measurement and testing are critical for responsible AI development, and then argues for context-specific benchmarks that address India's unique challenges, including caste-based stereotypes, linguistic diversity, and resource constraints.

We present three complementary studies. LExT proposes a trustworthiness framework for evaluating LLM-generated explanations in healthcare, revealing stark differences in faithfulness across models. IndiCASA introduces the first large-scale dataset capturing Indian-specific biases across caste, gender, religion, disability, and socioeconomic axes—exposing persistent stereotypes even in supposedly debiased models. Finally, our AACL work demonstrates how educational LLMs systematically disadvantage the Global South, reinforcing geographic and demographic inequities.

Together, these works underscore an urgent imperative: AI safety and fairness cannot be one-size-fits-all. Building trustworthy AI for diverse populations requires local expertise, culturally-grounded evaluation, and proactive bias measurement.

Bio: Professor B. Ravindran heads the Department of Data Science and Artificial Intelligence (DSAI), the Wadhwani School of Data Science and Artificial Intelligence (WSAI), the Robert Bosch Centre for Data Science & Artificial Intelligence (RBCDSAI), and the Centre for Responsible AI (CeRAI) at IIT Madras.

Currently, his research interests are centred on learning from and through interactions and span the areas of geometric deep learning and reinforcement learning. Additionally, he is involved with the Centre for Responsible AI (CeRAI), where his work aims to promote the responsible development and deployment of AI technologies across various domains, as well as ensuring that they are transparent, fair, and aligned with societal values.

He received his PhD from the University of Massachusetts, Amherst and his Master’s degree from the Indian Institute of Science, Bangalore. He is an elected fellow of the Association for Advancement of AI (AAAI) and Indian National Academy of Engineering (INAE), as well as an ACM Distinguished Member.

Mohit Bansal

Parker Distinguished Professor, Computer Science Department
University of North Carolina, Chapel Hill

Trustworthy Planning Agents for Collaborative Reasoning and Multimodal Generation

Abstract: In this talk, I will present our journey of developing trustworthy and adaptive AI planning agents that can reliably communicate and collaborate for uncertainty-calibrated reasoning (across diverse domains such as math, commonsense, coding, and tool-use) and for interpretable, controllable multimodal generation (across diverse modalities such as text, images, videos, audio, layouts, etc.).

In the first part, we will discuss: (1) how to teach agents to be trustworthy and reliable collaborators via social/pragmatic multi-agent interactions (e.g., confidence calibration via speaker-listener reasoning and learning to balance positive and negative persuasion), as well as (2) how to acquire and improve agent skills needed for efficient and robust perception and action (e.g., learning reusable, verified abstractions over actions & code, adaptive data generation based on discovered weak skills, and world model inference).

In the second part, we will discuss interpretable and controllable multimodal generation via LLM-agent-based planning and programming, such as (1) layout-controllable image generation and evaluation via visual programming, (2) consistent video generation via LLM-guided multi-scene planning, automatic targeted refinement, retrieval-augmented motion adaptation, and embodied physically-consistent verification, and (3) interactive, composable any-to-any multimodal generation. We will conclude with examples of improving real-world applications such as medical data reasoning and classroom education engagement.

Bio: Dr. Mohit Bansal is the John R. & Louise S. Parker Distinguished Professor and the Director of the MURGe-Lab (UNC-AI Group) in the Computer Science department at UNC Chapel Hill. He received his PhD from UC Berkeley in 2013 and his BTech from IIT Kanpur in 2008. His research expertise is in multimodal generative models, reasoning and planning agents, faithful language generation, and interpretable, efficient, and generalizable deep learning. He is a Fellow of AAAI and ACL, and recipient of the Presidential Early Career Award for Scientists and Engineers (PECASE), IIT Kanpur Young Alumnus Award, DARPA Director's Fellowship, NSF CAREER Award, Google Focused Research Award, Microsoft Investigator Fellowship, Army Young Investigator Award (YIP), DARPA Young Faculty Award (YFA), and outstanding paper awards at ACL, CVPR, EACL, COLING, CoNLL, and TMLR. He has been a keynote speaker for the ECAI 2025, ACM-CODS 2025, AACL-IJCNLP 2023, CoNLL 2023, and INLG 2022 conferences. His service includes EMNLP Program Co-Chair, CoNLL Program Co-Chair, and ACL Executive Committee, ACM Doctoral Dissertation Award Committee, ACL Doctoral Dissertation Award Co-Organizer, ACL Mentorship Program Co-Founder, Associate Editor-in-Chief for TPAMI, and Associate Editor for TACL, CL, IEEE/ACM TASLP, and CSL journals.
Webpage: https://www.cs.unc.edu/~mbansal/

Shivkumar Kalyanaraman

CEO, Anusandhan National Research Foundation (ANRF)
Govt of India

AI For Science & Engineering and ANRF Vision

Abstract: AI has tremendous potential to accelerate scientific discovery, the journey from “deep science” to “deep tech”, and the translation of scientific & engineering research into applied products. Different categories of techniques have recently found applications in different parts of scientific workflows.

AI techniques are likely to drive fundamental changes in how HPC and numerical simulations are done, how combinatorial exploration (e.g., computational chemistry or biology using diffusion techniques) is combined with appropriate experimental validation, how design/modelling and operational digital twins will unfold, and how high-throughput experimentation is carried out alongside AI-driven experiment design and virtual lab experimentation. It is important to drive up adoption of AI by the scientific and engineering community, and simultaneously evolve the tools and reimagine the workflows of science/engineering and translation.

The Anusandhan National Research Foundation (ANRF) has recently been established as a statutory body, with a governing board chaired by the Hon’ble Prime Minister of India, to catalyse the transformation of Research and Innovation across all stakeholders in India. The talk will also outline the vision, principles, and high-level approaches ANRF will use to drive collaborations and co-investments across stakeholders, especially around a mission-mode MAHA program on AI for Science and Engineering. We invite deeper community participation from all stakeholders, especially industry, academia, labs, and foundations/philanthropy, to come together with ANRF on the journey to Viksit Bharat.

Bio: Shiv has been appointed by the Hon’ble Prime Minister of India as CEO, Anusandhan National Research Foundation (ANRF). He was previously CTO, Energy Industry, Asia at Microsoft. Before that, he was Executive General Manager of Growth Offerings at GE Power Conversion, responsible for new Line of Business development in e-Mobility, Commercial & Industrial Solar, and digital/AI innovations. Earlier he was at IBM Research - India, and the Chief Scientist of IBM Research - Australia. Before IBM, he was a tenured Full Professor at Rensselaer Polytechnic Institute in Troy, NY, USA. Shiv has degrees from the Indian Institute of Technology, Madras (B.Tech, CS), Ohio State University (MS, PhD), and RPI (Executive MBA). Shiv is a Distinguished Alumnus Awardee of IIT Madras (2021, recognizing 0.3% of IITM’s alumni over the years) and Ohio State University (2021), Fellow of the IEEE (2010), Fellow of the Indian National Academy of Engineering (2015), ACM Distinguished Scientist (2010), Microsoft Gold Club (2024), and MIT Technology Review TR100 young innovator (1999).