### 10th World Congress in Probability and Statistics

## Invited Session (live Q&A at Track 3, 10:30PM KST)

## Analysis of Dependent Data (Organizer: Chae Young Lim)

### Statistical learning with spatially dependent high-dimensional data

Taps Maiti (Michigan State University)

### Large-scale spatial data science with ExaGeoStat

Marc Genton (King Abdullah University of Science and Technology (KAUST))

### Multivariate spatio-temporal Hawkes process models of terrorism

Mikyoung Jun (University of Houston)

This is joint work with Scott Cook.

### Q&A for Invited Session 11

###### Session Chair

Chae Young Lim (Seoul National University)

## Randomized Algorithms (Organizer: Devdatt Dubhashi)

### Is your distribution in shape?

Ronitt Rubinfeld (Massachusetts Institute of Technology)

A distribution $p$ over a partially ordered domain is *monotone* if for any two comparable elements $x < y$ in the domain, we have that $p(x) \le p(y)$. For example, for the classic $n$-dimensional hypercube domain, in which domain elements are described via $n$ different features, monotonicity implies that for every element, an increase in the value of one of the features can only increase its probability.

We recount the development over the past nearly two decades of *monotonicity testing* algorithms for distributions over various discrete domains, which make no a priori assumptions on the underlying distribution. We study the sample complexity for testing whether a distribution is monotone as a function of the size of the domain, which can vary dramatically depending on the structure of the underlying domain. Not surprisingly, the sample complexity over high-dimensional domains can be much greater than over low-dimensional domains of the same size. Nevertheless, for many important domain structures, including high-dimensional domains, the sample complexity is sublinear in the size of the domain. In contrast, when no a priori assumptions are made about the distribution, learning the distribution requires sample complexity that is linear in the size of the domain.

The techniques used draw on tools from a wide spectrum of areas, including statistics, optimization, combinatorics, and computational complexity theory.
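The hypercube example above can be made concrete with a brute-force check (distinct from the sublinear-sample *testing* algorithms the talk surveys, which only see samples, not the full distribution). This is a minimal sketch, assuming a distribution given explicitly as a dict from bit tuples in $\{0,1\}^n$ to probabilities; the function name is illustrative, not from the talk.

```python
from itertools import product

def is_monotone_on_hypercube(p):
    """Exhaustively verify monotonicity of a fully specified distribution
    over {0,1}^n: raising any single coordinate from 0 to 1 must not
    decrease the probability of the element."""
    n = len(next(iter(p)))
    for x in product((0, 1), repeat=n):
        for i in range(n):
            if x[i] == 0:
                # y covers x: identical except coordinate i is raised to 1.
                y = x[:i] + (1,) + x[i + 1:]
                if p[x] > p[y]:
                    return False
    return True

# Monotone on {0,1}^2: probability only grows as features switch on.
p_mono = {(0, 0): 0.1, (0, 1): 0.2, (1, 0): 0.2, (1, 1): 0.5}
# Not monotone: raising the first bit of (0,0) drops the mass 0.4 -> 0.1.
p_not = {(0, 0): 0.4, (0, 1): 0.2, (1, 0): 0.1, (1, 1): 0.3}
```

Note the contrast with the abstract's point: this check reads all $2^n$ probabilities, whereas the surveyed testing algorithms distinguish monotone distributions from those far from monotone using a number of samples sublinear in the domain size.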

### Beyond independent rounding: strongly Rayleigh distributions and the traveling salesperson problem

Shayan Oveis Gharan (University of Washington)

### A survey of dependent randomized rounding

Aravind Srinivasan (University of Maryland, College Park)

### Q&A for Invited Session 19

###### Session Chair

Devdatt Dubhashi (Chalmers University of Technology)

## Stochastic Partial Differential Equations (Organizer: Leonid Mytnik)

### Phase analysis for a family of stochastic reaction-diffusion equations

Carl Mueller (University of Rochester)

### Regularization by noise for SPDEs and SDEs: a stochastic sewing approach

Oleg Butkovsky (Weierstrass Institute)

### Stochastic quantization, large N, and mean field limit

Hao Shen (University of Wisconsin-Madison)

(Joint work with Scott Smith, Rongchan Zhu and Xiangchan Zhu.)

### Q&A for Invited Session 23

###### Session Chair

Leonid Mytnik (Israel Institute of Technology)

## Pathwise Stochastic Analysis (Organizer: Hendrik Weber)

### Sig-Wasserstein Generative models to generate realistic synthetic time series

Hao Ni (University College London)

This is joint work with Lukasz Szpruch (University of Edinburgh), Magnus Wiese (University of Kaiserslautern), Shujian Liao (UCL), and Baoren Xiao (UCL).

### State space for the 3D stochastic quantisation equation of Yang-Mills

Ilya Chevyrev (University of Edinburgh)

Based on a joint work in progress with Ajay Chandra, Martin Hairer, and Hao Shen.

### A priori bounds for quasi-linear parabolic equations in the full sub-critical regime

Scott Smith (Chinese Academy of Sciences)

### Q&A for Invited Session 26

###### Session Chair

Hendrik Weber (University of Bath)