
Keynote


Keynote speaker 1:

    Tianxi Cai, Harvard University


    Tianxi Cai is a major player in developing analytical tools for mining EHR data and predictive modeling with biomedical data. She provides statistical leadership on several large-scale projects, including the NIH-funded Undiagnosed Diseases Network at DBMI. Cai's research lab develops novel statistical and machine learning methods for several areas including clinical trials, real world evidence, and personalized medicine using genomic and phenomic data. Cai received her ScD in Biostatistics at Harvard and was an assistant professor at the University of Washington before returning to Harvard as a faculty member in 2002.

https://www.hsph.harvard.edu/profile/tianxi-cai/

Report Title: Robust Consensus Learning with Multi-Institutional EHR Data: Opportunities and Challenges



Keynote speaker 2:

    Will Cong, Cornell University

    Will Cong is the Rudd Family Professor of Management, professor of finance, and founding director of the FinTech Initiative at Cornell University. He is also a finance editor at Management Science, a faculty scientist at the Initiative for Cryptocurrencies & Contracts (IC3), a research associate at the NBER, the founder of multiple international research forums, a former Kauffman Junior Fellow, a Poets & Quants World Best Business School Professor, and a 2022 Top 10 Quant Professor.
    Cong's research spans financial economics, information economics, fintech, the digital economy, and entrepreneurship. He and his coauthors have pioneered the introduction of goal-oriented search and interpretable AI for finance, laid the foundations of tokenomics (covering the categorization of tokens, cryptocurrency pricing, central bank digital currencies and payment systems, and optimal token monetary policy design), analyzed centralization issues and dynamic incentives in blockchains and DeFi, and developed data analytics for detecting market manipulation and improving fintech regulation, among other contributions.

https://business.cornell.edu/faculty-research/faculty/lc898/

Report Title: Generative AI for Economic Equilibrium Analyses and Financial Applications

Abstract Content: I overview two core themes of modern AI: goal-oriented end-to-end optimization and generative modeling with deep learning. I then discuss non-text-based generative modeling involving transformer-based reinforcement learning or the novel panel trees for portfolio management, test asset creation, and detecting heterogeneity groups (e.g., asset clusters with differential return predictability). Finally, I introduce the concept of data-driven generative equilibrium for counterfactual analyses in economics, with an application to online lending markets.



Keynote speaker 3:

    Weinan E, Peking University


    A renowned mathematician, he is an Academician of the Chinese Academy of Sciences, Chair Professor at Peking University, Director of the National Engineering Laboratory for Big Data Analysis and Applications, Director of the International Center for Machine Learning at Peking University, President of the Beijing Institute of Scientific Intelligence, President of the Beijing Big Data Research Institute, Advisor to the Chinese Society for Industrial and Applied Mathematics, and Chair of the Academic Committee at the Wuhan Institute of Mathematics and Intelligence. He was formerly a professor in the Department of Mathematics and the Program in Applied and Computational Mathematics at Princeton University.
    His primary research areas include computational mathematics, applied mathematics, machine learning, and their applications in fields such as mechanics, physics, chemistry, and materials science.

https://www.math.pku.edu.cn/jsdw/js_20180628175159671361/e_20180628175159671361/138270.htm

Report Title: Towards an understanding of the principles behind deep learning



Keynote speaker 4:

    James M. Robins, Harvard University


    James M. Robins is an epidemiologist and biostatistician best known for advancing methods for drawing causal inferences from complex observational studies and randomized trials, particularly those in which the treatment varies with time. He is the 2013 recipient of the Nathan Mantel Award for lifetime achievement in statistics and epidemiology, and a recipient of the 2022 Rousseeuw Prize in Statistics, jointly with Miguel Hernán, Eric Tchetgen Tchetgen, Andrea Rotnitzky, and Thomas Richardson.

    He graduated in medicine from Washington University in St. Louis in 1976. He is currently the Mitchell L. and Robin LaFoley Dong Professor of Epidemiology at the Harvard T.H. Chan School of Public Health. He has published over 100 papers in academic journals and is an ISI highly cited researcher.

https://www.hsph.harvard.edu/profile/james-m-robins/ 

Report Title: Causal Inference: History, Contributions, and Future

Abstract Content: Forty years ago, the following disciplines each had their own languages, opinions, and idiosyncrasies regarding causal inference: philosophy, computer science, sociology, psychology, statistics, epidemiology, political science, and economics. Today all speak a common language, so new methodologies rapidly cross-fertilize. Top journals have gone from knee-jerk rejection to active solicitation of articles in the area.

The rapid development of the field has been driven by:

1. The end of the historical suppression of causal language in statistics and medicine (aside from randomized clinical trials)

2. The internet making cross-disciplinary understanding and collaboration easy

3. The need for individualized treatment regimes in Medicine

4. Tech companies realizing that optimizing profits depended on causal interventions rather than just prediction

5. The development of causal graphs by Spirtes, Glymour, Scheines, and Pearl, which offer non-technical users the ability to reason validly about complex causal systems

6. The existence of huge data sets, leading to data-driven rather than hypothesis-driven science.

In my lecture, I will give a history of statistical methods for causal inference, focusing on methods developed by myself and colleagues. I will explain why the causal methods we developed for the analysis of time-varying treatments have had such a large impact, for over 25 years now, on substantive areas in which confounding by time-varying covariates is very strong, as in studies of HIV-infected individuals. In addition, I will describe why these methods are an integral part of the target trial methodology introduced by Miguel Hernán and myself, a methodology that is altering the analytical paradigm for the estimation of causal effects from longitudinal observational data in Medicine.

In more detail, I will review both (i) the role of marginal structural models, structural nested models, and the g-formula in modeling the effects of time-varying treatments and (ii) the development, joint with Andrea Rotnitzky, of doubly and multiply robust estimation of the model parameters. This will be followed by a brief review of ground-breaking causal methods developed by other researchers, centering on the development of proximal inference by Eric Tchetgen Tchetgen and Wang Miao and on the contributions of Mark van der Laan. I will conclude with a discussion of the future of causal inference in the coming age of AI.
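For context on the g-formula mentioned above, the display below gives its standard statement for the mean of an outcome Y under a fixed time-varying treatment sequence; the notation is illustrative and not taken from the talk.

```latex
% g-formula (discrete case): mean of Y under the fixed treatment sequence
% \bar{a} = (a_0, \dots, a_K), with time-varying covariates L_k and treatments A_k.
\[
  E\big[ Y^{\bar{a}} \big]
  \;=\; \sum_{\bar{l}}
  E\big[ Y \mid \bar{A}_K = \bar{a}_K,\ \bar{L}_K = \bar{l}_K \big]
  \prod_{k=0}^{K}
  P\big( L_k = l_k \mid \bar{A}_{k-1} = \bar{a}_{k-1},\ \bar{L}_{k-1} = \bar{l}_{k-1} \big)
\]
```

Under sequential exchangeability, positivity, and consistency, this expression identifies the counterfactual mean that marginal structural models and structural nested models target by other routes.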



Keynote speaker 5:

    C. F. Jeff Wu, Chinese University of Hong Kong, Shenzhen


    An internationally renowned statistician and member of the U.S. National Academy of Engineering, he is the Xueqin Chair Professor and Dean of the School of Data Science at the Chinese University of Hong Kong, Shenzhen. Previously, he served as the H.C. Carver Professor of Statistics at the University of Michigan, and as a Professor in the School of Industrial and Systems Engineering and the Coca-Cola Chair Professor of Statistics at the Georgia Institute of Technology. In 2011, he received the R.A. Fisher Lectureship. In 1997, he coined the term "Data Science," advocating for renaming statistics to data science and referring to statisticians as data scientists.

    His primary research focuses on applied mathematical sciences (statistics) and engineering sciences (quality engineering and industrial engineering).

https://sds.cuhk.edu.cn/en/teacher/1900

Report Title: Physics-informed learning and uncertainty quantification

Abstract Content: Suppose a system or product can be described by its underlying physical knowledge, usually via a set of partial differential equations (PDEs). Then the corresponding AI tools must be informed by this knowledge. In this talk I will focus on using physical knowledge to “inform” the development of statistical/machine learning, and I will use two recent works to illustrate how this can be done. In developing the injector design for rocket propulsion, the mixing of oxygen and fuel was critical for the stability of propulsion. The Navier-Stokes equations were used to describe the mixing behavior, but solving them could take six weeks on a computer cluster, so only a small number of numerical solutions could be obtained. A surrogate model was instead developed using tools from uncertainty quantification (UQ) and machine learning (ML). It can be computed within one hour and mimics the mixing behavior; the success lies in using UQ-ML tools to incorporate known physical phenomena about mixing. The second work addresses the issue of incorporating knowledge of the PDEs, such as their boundary conditions: an efficient surrogate model must also satisfy the same boundary conditions. We have constructed a new class of Gaussian process (GP) models that incorporate this boundary information, developing a framework of GP models based on stochastic partial differential equations (SPDEs) with Dirichlet or Robin boundary conditions. For fast computation, we use kernel regression to accurately approximate the SPDE covariances. Real examples are used for illustration.
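As a minimal, generic illustration of the surrogate-modeling idea sketched above (fitting a cheap emulator with uncertainty estimates to a handful of expensive simulator runs), the snippet below uses scikit-learn's Gaussian process regressor on a hypothetical one-dimensional toy "simulator". It is only a sketch under those assumptions; it is not the rocket-injector model from the talk and does not implement the boundary-condition-constrained SPDE covariances described above.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Hypothetical stand-in for an expensive simulator (e.g., one CFD run).
def expensive_simulation(x):
    return np.sin(3.0 * x) + 0.5 * x

# Small design of experiments: only a few simulator runs are affordable.
X_train = np.linspace(0.0, 2.0, 8).reshape(-1, 1)
y_train = expensive_simulation(X_train).ravel()

# Fit a GP surrogate; kernel hyperparameters are estimated from the runs.
kernel = ConstantKernel(1.0) * RBF(length_scale=0.5)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(X_train, y_train)

# Cheap predictions with uncertainty at untried inputs.
X_new = np.linspace(0.0, 2.0, 50).reshape(-1, 1)
mean, std = gp.predict(X_new, return_std=True)
```

In practice the training inputs would come from a space-filling design over the simulator's input space, and the fitted surrogate replaces further expensive runs while its predictive standard deviation quantifies the approximation uncertainty.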


Keynote speaker 6:

    Zhihua Zhou, Nanjing University

    An expert in artificial intelligence research, he is currently the Vice President of Nanjing University, Dean of the School of Computer Science at Nanjing University, and concurrently serves as Dean of the Nanjing University School of Artificial Intelligence. He is also the Executive Deputy Director of the State Key Laboratory for Novel Software Technology and Director of the Institute of Machine Learning and Data Mining. Additionally, he is a Fellow of the European Academy of Sciences.

    His primary research areas include artificial intelligence, machine learning, and data mining.

https://www.nju.edu.cn/info/1040/372961.htm

Report Title: Abductive Learning: A New Paradigm of Artificial Intelligence Driven by Data and Knowledge

Abstract Content: "Data-driven" artificial intelligence based on machine learning has made remarkable progress. However, the issue that its trustworthiness cannot be guaranteed has become increasingly prominent. Although "knowledge-driven" artificial intelligence is inherently inefficient in its use of data, it can ensure correctness and interpretability through logical reasoning. Establishing a new paradigm of artificial intelligence that integrates the data-driven and knowledge-driven approaches is a significant challenge for the development of the field. This report will briefly present some preliminary explorations in this direction.

Contact Us

1. Conference Service

Contact Person: Ms. Su (Academic)

Phone: +86-571-88208268

Email: suweina@zju.edu.cn


Contact Person: Ms. Zhan (Hotel & Transportation)

Phone: +86-571-88177983; +86-18814880579

Email: qizhenhz@zjuyh.com


2. Finance (Payment & Receipt)

Contact Person: Ms. Cheng

Phone: +86-15990153584

Email: qizhenhz@zjuyh.com
