In order to obtain a canonical alternative, in the early 1960s Nash proposed a conjectural process for resolving singularities, now known as the Nash blowup. A central question in resolution of singularities is whether iterating this construction suffices to resolve all singularities. In recent joint work with Federico Castillo, Daniel Duarte, and Maximiliano Leyton-Álvarez, we showed that the answer is negative for varieties of dimension four or higher.
In this talk, I will give an overview of both the problem of resolution of singularities and the approach via the Nash blowup. The presentation is aimed at a broad mathematical audience, including those with no prior background in algebraic geometry.
Imagine an N×N chessboard with a single piece: a king. At the start, the king sits on an arbitrary square and, at each time step n = 1, 2, 3, …, it moves at random to one of the allowed neighboring squares.
How long will it take the king to visit every square on the board? This time is random, so we cannot predict it exactly… but can we say something about its typical value?
In this talk we will explore what is known about this problem, which, although it may not seem so at first, is of interest in several areas of mathematics (and, for now, remains open). We will see that, surprisingly (or not?), the number π appears in the answer, and I will try to convince you that, to tackle this problem, one needs to know how many packs of stickers must be bought in order to fill an album of N stickers. We will also take the opportunity to learn how to compute the latter, in case you ever have to complete one.
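For reference, the sticker-album question is the classical coupon collector problem; under the simplest assumption that each pack contains a single sticker chosen uniformly at random among the N possible ones, the expected number of packs $T_N$ needed to complete the album is
\[
  \mathbb{E}[T_N] \;=\; N \sum_{k=1}^{N} \frac{1}{k} \;=\; N H_N \;\approx\; N\ln N + \gamma N,
\]
where $\gamma \approx 0.5772$ is the Euler-Mascheroni constant: when $k$ stickers are still missing, each new pack completes one of them with probability $k/N$, so that stage takes $N/k$ packs on average.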
The use of different Learning Management Systems, or educational platforms, has become a key tool in education. These systems generate an enormous volume of data every day, from both students and teachers. Turning these data into information relevant for decision-making is a major challenge, owing to the complexity of their structure and the difficulty of summarizing the learning process from the available records.
This work presents methods to transform data from educational platforms into relevant information and explores how that information can be used to predict academic performance in public primary education in Uruguay. Bayesian statistical learning methods are applied to predict academic performance from usage patterns of the Little Bridge platform, together with sociodemographic variables and institution-level data. Specifically, the BART model (Bayesian Additive Regression Trees) is used and its predictive performance is compared with Random Forest. The Bayesian approach is chosen for its ability to incorporate school-level random effects, which allows learning processes to be analyzed at multiple levels.
The results can be applied both at the individual level, for the early identification of students at risk, and at the institutional level, to highlight schools that require intervention or that can serve as models of success.
The choice of a prior distribution is a key aspect of the Bayesian method. However, in many cases, such as the family of power links, this choice is not trivial. In this article, we introduce a penalized complexity prior (PC prior) for the skewness parameter of this family, which is useful for dealing with imbalanced data. We derive a general expression for this density and show its usefulness for particular cases such as the power logit and the power probit links. A simulation study and a real data application are used to assess the efficiency of the introduced densities in comparison with the Gaussian and uniform priors. Results show improved point and credible interval estimation for the considered models when the PC prior is used instead of other well-known standard priors.
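For context, PC priors are typically built from the generic construction of Simpson et al. (2017): the departure of a model with parameter $\xi$ from a base model $\xi_0$ is measured by a distance based on the Kullback-Leibler divergence, and an exponential prior on that distance induces the prior on $\xi$,
\[
  d(\xi) \;=\; \sqrt{2\,\mathrm{KLD}\bigl(f(\cdot \mid \xi)\,\|\,f(\cdot \mid \xi_0)\bigr)},
  \qquad
  \pi(\xi) \;=\; \lambda\, e^{-\lambda d(\xi)} \left|\frac{\partial d(\xi)}{\partial \xi}\right|,
\]
with $\lambda > 0$ controlling the strength of the penalty; the specific density derived in the article for the skewness parameter of power links may take a different closed form.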
The inverse elasticity problem can be simply stated as: given a deformed configuration and the forces that act on it, find an initial stress-free configuration such that when the given forces are applied to it, one recovers the given deformed configuration. Surprisingly, this problem can be framed as a (direct) elasticity one, whose mathematical properties are inherited from the original direct problem if the underlying material is sufficiently regular.
In this seminar, I will review this problem and its main mathematical properties. After this brief introduction, I will show some artifacts that appear when solving this problem, such as self-intersections and geometrically incompatible solutions. The talk will finish with an extension of this system to poroelastic materials, where I will show that the strong form of the equations does not allow for a weak formulation, and this requires some special treatment. All models will be shown to work in realistic heart geometries.
Scale-free networks play a fundamental role in the study of complex networks and various applied fields due to their ability to model a wide range of real-world systems. A key characteristic of these networks is their degree distribution, which often follows a power law, with probability mass function proportional to $x^{-\alpha}$ and $\alpha$ typically in the range $2 < \alpha < 3$. In this talk, we introduce Bayesian inference methods that yield more accurate estimates of the scaling parameter, together with precise credible intervals, than traditional methods, which often produce biased estimates. Through a simulation study, we demonstrate that our approach provides nearly unbiased estimates for the scaling parameter, enhancing the reliability of inferences. We also evaluate new goodness-of-fit tests that improve on the Kolmogorov-Smirnov test commonly used for this purpose. Our findings show that the Watson test offers superior power while maintaining a controlled type I error rate, enabling us to better determine whether data adhere to a power-law distribution. Finally, we propose a piecewise extension of this model to provide greater flexibility, evaluating its estimation and goodness-of-fit properties as well. In the complex networks field, this extension allows us to model the full degree distribution, instead of focusing only on the tail, as is commonly done. We demonstrate the utility of these novel methods through applications to two real-world datasets, showcasing their practical relevance and potential to advance the analysis of power-law behavior.
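For concreteness, a minimal sketch of the classical baseline that such Bayesian methods are usually compared against: the continuous power-law maximum likelihood estimate of $\alpha$ above a threshold $x_{\min}$ and the Kolmogorov-Smirnov distance. This is not the procedure of the talk; the function names and the synthetic data are illustrative only.

import numpy as np

def fit_power_law(x, x_min):
    """MLE of alpha for a continuous power law p(x) ~ x^(-alpha), x >= x_min."""
    tail = x[x >= x_min]
    n = tail.size
    alpha_hat = 1.0 + n / np.sum(np.log(tail / x_min))
    return alpha_hat, tail

def ks_distance(tail, x_min, alpha):
    """Kolmogorov-Smirnov distance between the empirical and fitted tail CDFs."""
    tail = np.sort(tail)
    n = tail.size
    model_cdf = 1.0 - (tail / x_min) ** (1.0 - alpha)
    d_plus = np.max(np.arange(1, n + 1) / n - model_cdf)
    d_minus = np.max(model_cdf - np.arange(0, n) / n)
    return max(d_plus, d_minus)

# Illustrative check on synthetic Pareto data with true alpha = 2.5 and x_min = 1.
rng = np.random.default_rng(0)
x = (1.0 - rng.uniform(size=5000)) ** (-1.0 / 1.5)  # inverse-CDF sampling
alpha_hat, tail = fit_power_law(x, x_min=1.0)
print(alpha_hat, ks_distance(tail, x_min=1.0, alpha=alpha_hat))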
Assessing agreement between instruments is fundamental in clinical and observational studies to evaluate how similarly two methods measure the same set of subjects. In this talk, we present two extensions of a widely used coefficient for assessing agreement between continuous variables. The first extension introduces a novel agreement coefficient for lattice sequences observed over the same areal units, motivated by the comparison of poverty measurement methodologies in Chile. The second extension proposes a new coefficient, denoted as ρ1, designed to measure agreement between continuous measurements obtained from two instruments observing the same experimental units. Unlike traditional approaches, ρ1 is based on L1 distances, providing robustness to outliers and avoiding dependence on nuisance parameters. Both proposals are supported by theoretical results, an inference framework, and simulation studies that illustrate their performance and practical relevance.
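For context, the widely used baseline coefficient is presumably Lin's concordance correlation coefficient, which for paired measurements $X$ and $Y$ is
\[
  \rho_c \;=\; \frac{2\,\sigma_{XY}}{\sigma_X^2 + \sigma_Y^2 + (\mu_X - \mu_Y)^2},
\]
equal to 1 only under perfect agreement; the extensions described above adapt this idea to areal data and to $L_1$-based discrepancies, respectively.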
We begin a study of Large Deviations Theory, which provides results for analyzing the (exponential) decay of probabilities. Studying these techniques requires tools not only from probability but also from analysis, in particular convex analysis.
The aim of these sessions is to review the classical results of the theory and to see their applications in areas of probability such as Statistical Mechanics.
In this session we start by reviewing the definition of a large deviation principle. We will see a particular instance of Cramér's Theorem and the abstract notion of large deviations.
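For orientation, a standard statement of Cramér's theorem for the empirical mean of i.i.d. real random variables (the particular instance covered in the session may be formulated differently): assuming the log-moment generating function $\Lambda(\lambda) = \log \mathbb{E}[e^{\lambda X_1}]$ is finite for all $\lambda$,
\[
  \lim_{n\to\infty} \frac{1}{n}\log \mathbb{P}\!\left(\frac{S_n}{n} \ge x\right) = -I(x),
  \qquad x > \mathbb{E}[X_1],
\]
where $S_n = X_1 + \dots + X_n$ and the rate function is the Legendre transform
\[
  I(x) = \sup_{\lambda \in \mathbb{R}} \bigl\{ \lambda x - \Lambda(\lambda) \bigr\}.
\]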
In the search for multivariate distributions that provide greater flexibility in modeling data characterized by high levels of skewness, kurtosis, and the presence of outliers, new families of multivariate distributions have emerged, among which multivariate normal mixture distributions stand out. In this context, we introduce a multivariate normal mixture distribution based on the Birnbaum-Saunders distribution and examine some of its key properties. To estimate the parameters of this normal scale mixture distribution, we propose a maximum likelihood approach implemented via the EM algorithm. To support inferential analyses, we derive the Fisher information matrix. Additionally, we formulate a linear hypothesis on the parameter vector of interest and evaluate it using the likelihood ratio, Wald, score, and gradient statistics. Finally, we illustrate the application of the proposed methodology to real datasets, complementing the analysis with a simulation study to assess its performance.
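For reference, the four classical statistics mentioned take the following generic forms for a simple null hypothesis $H_0: \theta = \theta_0$ (the versions used for the linear hypothesis in the talk are the standard generalizations), with $\ell$ the log-likelihood, $U$ the score, $I$ the expected information, and $\hat\theta$ the MLE:
\[
  \xi_{\mathrm{LR}} = 2\{\ell(\hat\theta) - \ell(\theta_0)\}, \quad
  \xi_{\mathrm{W}} = (\hat\theta - \theta_0)^{\top} I(\hat\theta)(\hat\theta - \theta_0), \quad
  \xi_{\mathrm{S}} = U(\theta_0)^{\top} I(\theta_0)^{-1} U(\theta_0), \quad
  \xi_{\mathrm{G}} = U(\theta_0)^{\top}(\hat\theta - \theta_0),
\]
all asymptotically equivalent under $H_0$.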
Regulators' procurement of renewable energy capacity is rapidly expanding. While stochastic optimization methods are typically employed to determine the optimal total capacity to procure, a smaller body of research explores an alternative framework borrowed from finance: portfolio optimization. In this study, we apply portfolio optimization to renewable energy, aiming to identify optimal portfolios that balance two objectives: maximizing energy production per dollar invested and minimizing its variance. We use principal component analysis (PCA) and other techniques to identify these portfolios under a limited sample size. The proposed method is tested on historical Belgian production data spanning five years, with out-of-sample comparisons evaluating portfolio performance under real-world conditions.
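As a rough illustration of the portfolio-optimization framing (not the PCA-based procedure of the study), a minimal mean-variance sketch in which per-asset "returns" stand for energy production per dollar invested; the data shapes, number of sites, and risk-aversion value are assumptions.

import numpy as np

def min_variance_weights(cov):
    """Closed-form minimum-variance portfolio: w proportional to Sigma^{-1} 1, summing to 1."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

def mean_variance_weights(mu, cov, risk_aversion=5.0):
    """Maximize w'mu - (gamma/2) w'Sigma w subject to sum(w) = 1 (shorting allowed)."""
    inv = np.linalg.inv(cov)
    ones = np.ones(len(mu))
    w_unconstrained = inv @ mu / risk_aversion
    # Project onto the budget constraint via a Lagrange multiplier.
    lam = (w_unconstrained.sum() - 1.0) / (ones @ inv @ ones)
    return w_unconstrained - lam * (inv @ ones)

# Illustrative use: simulated production-per-dollar series for four hypothetical sites.
rng = np.random.default_rng(1)
production = rng.normal(loc=[0.9, 1.0, 1.1, 1.05], scale=0.2, size=(1000, 4))
mu, cov = production.mean(axis=0), np.cov(production, rowvar=False)
print(min_variance_weights(cov))
print(mean_variance_weights(mu, cov))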
The dynamics of a rain forest are extremely complex, involving births, deaths, and growth of trees, with intricate interactions between trees, animals, climate, and environment. We consider the patterns of recruits (new trees) and dead trees between rain forest censuses. For a current census we specify regression models for the conditional intensity of recruits and the conditional probabilities of death given the current trees and spatial covariates. We estimate regression parameters using conditional composite likelihood functions that only involve the conditional first-order properties of the data. When constructing assumption-lean estimators of covariance matrices of parameter estimates, we only need mild assumptions of decaying conditional correlations in space, while assumptions regarding correlations over time are avoided by exploiting conditional centering of the composite likelihood score functions. Time series of point patterns from rain forest censuses are quite short, while each point pattern covers a fairly big spatial region. To obtain asymptotic results we therefore use a central limit theorem for the fixed-time-span, increasing-spatial-domain asymptotic setting. This also allows us to handle the challenge of using stochastic covariates constructed from past point patterns. Conveniently, it suffices to impose weak dependence assumptions on the innovations of the space-time process. We investigate the proposed methodology by simulation studies and an application to rain forest data.
Technological advances have transformed data collection and analysis, enabling the acquisition of large volumes of real-time information, commonly referred to as big data. In sectors such as the fishing industry, the adoption of modern technologies has introduced new challenges due to the excess of zeros in catch records, reflecting the natural variability in species abundance. In agriculture, satellite imagery has revolutionized crop monitoring, improving decisions related to plant health, resource management, and yield forecasting. Similarly, environmental monitoring using these technologies facilitates tracking of climate change and pollution, which is crucial for public health and sustainability. To address these issues, statistical models must account for spatial and spatio-temporal dependencies in the data, as well as the possibility of zero-inflation. In the literature, both Gaussian and (non)-Gaussian models have been developed for continuous or discrete data structures, but the excess zeros present significant challenges when modeling random fields. Techniques such as logarithmic transformations or constant adjustments have been proposed, though these often distort the data structure or are not feasible. Additionally, large-scale datasets present computational difficulties, as the high cost of likelihood methods often proves prohibitive. To mitigate this, methods such as composite likelihood have been employed, balancing statistical accuracy with computational efficiency in estimation. Furthermore, the concept of effective sample size (ESS) is essential for quantifying the information content in spatial datasets, addressing redundancy issues arising from spatial correlation.
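For reference, one common definition of the effective sample size for $n$ spatially correlated observations with correlation matrix $R$ is
\[
  \mathrm{ESS} \;=\; \mathbf{1}_n^{\top} R^{-1} \mathbf{1}_n,
\]
which equals $n$ when the observations are uncorrelated ($R = I_n$) and decreases as spatial correlation, and hence redundancy, grows; the definition developed in this project for the new class of random fields may differ.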
The goal of this research is to propose a new class of continuous spatial and spatio-temporal (non)-Gaussian random fields with positive support and excess zeros. The study develops a hybrid composite likelihood function that combines block likelihood and pairwise likelihood methods to efficiently handle large-scale data estimation, while also aiding in the generation of the proposed random fields from bivariate distributions. Additionally, the effective sample size (ESS) will be defined within the context of this new class of random fields, with particular attention to assessing its asymptotic normality. The proposed methodology will be validated through simulations and comparisons with existing techniques. This work contributes to the advancement of statistical models for high-dimensional spatial and spatio-temporal data with excess zeros, providing an important tool for spatial data analysis in complex real-world scenarios.
Bayesian nonparametric (BNP) theory is well developed for continuous random variables. For discrete data, the main BNP approaches rely on the Dirichlet Process (DP) itself or on DP mixtures of Poisson kernels. The DP does not allow smooth deviations from its base measure, while a Poisson mixture can never fit under-dispersed data. However, assuming the existence of an underlying continuous variable makes it possible to transfer the continuous theory to the discrete setting. Under this approach, the current project develops a flexible regression model endowed with a model-selection feature to identify the most relevant structure in the context of binary, ordinal, and count data. In particular, the project has three specific goals: 1) develop a latent dependent DP mixture model for light-tailed discrete data, 2) develop a latent dependent NGGP mixture model for heavy-tailed data, and 3) extend the two models to a multivariate setting. It is hoped that, as the sample size increases, the proposed models will identify the true structure and provide a better fit than those common in the literature. This should hold for datasets exhibiting zero inflation as well as under-, equi-, or over-dispersion.
We explore examples of Dirac operators on bounded domains exhibiting an interval of essential spectrum. In particular, we consider three-dimensional Dirac operators on Lipschitz domains with critical electrostatic and Lorentz scalar shell interactions supported on a compact smooth surface. Unlike typical bounded-domain settings where the spectrum is purely discrete, the criticality of these interactions can generate a nontrivial essential spectrum interval, whose position and length are explicitly controlled by the coupling constants and surface curvatures.
Based on joint work with J. Behrndt (TU Graz), M. Holzmann (TU Graz), and K. Pankrashkin (Univ. Oldenburg).
Torelli Theorem for K3 surfaces and its proof. Period map. Moduli of polarized K3 surfaces (the case of Kummer surfaces). Formulation of Torelli Theorem for IHS manifolds.
Heavy-tailed distributions have been a subject of study for a long time due to their numerous applications in various fields, such as economics, natural disasters, signals, and social sciences. In particular, there is extensive research on power-law distributions ($p(x) \propto x^{-\alpha}$) and their generalization, regularly varying distributions ($\mathcal{RV}_\alpha$), which behave approximately like a power law in the tail of the distribution.
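To fix notation, a standard way to make "approximately a power law in the tail" precise is through the survival function: the tail is regularly varying of index $\alpha$ (conventions place the index on the density or on the tail, differing by one) when
\[
  \overline{F}(x) = \mathbb{P}(X > x) = L(x)\, x^{-\alpha},
  \qquad
  \lim_{x \to \infty} \frac{L(tx)}{L(x)} = 1 \ \text{for all } t > 0,
\]
where $L$ is a slowly varying function, so the tail behaves like $x^{-\alpha}$ up to factors such as logarithms.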
Although multiple approaches have been developed to study tail behavior in both univariate and multivariate data, as well as in the presence of regressors, many of these studies tend to set an arbitrary threshold or percentile from which the fitting process begins. This can result in a loss of information contained in the body of the distribution. On the other hand, some research uses all observed data to estimate heavy-tailed densities, particularly under Bayesian approaches. However, these models tend to be complex to handle, especially when model selection is required.
This project has two main objectives. The first is to propose Bayesian model selection in flexible regression models for heavy-tailed distributions $\mathcal{RV}_\alpha$, using a simple yet flexible model such as the Gaussian mixture model under a dependent Dirichlet process (DDP-GMM), in the logarithmic space of the observations, where $\mathcal{RV}_\alpha$ distributions become light-tailed. This approach facilitates model selection through a Spike and Slab methodology, as it allows for the analytical computation of the marginal likelihood.
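The key fact behind working on the log scale can be seen with a pure power-law tail: if the density of $X$ is proportional to $x^{-\alpha}$ for $x \ge x_0 > 0$ with $\alpha > 1$, then $Y = \log X$ has density
\[
  p_Y(y) = p_X(e^{y})\, e^{y} \;\propto\; e^{-(\alpha - 1)\, y}, \qquad y \ge \log x_0,
\]
i.e., an exponentially decaying (light) tail, so a Gaussian mixture on the log scale can capture body and tail simultaneously.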
The second objective is to develop a model selection strategy using flexible regression for heavy-tailed $\mathcal{RV}_\alpha$ data. To achieve this, a Bayesian quantile regression will be proposed for both low and high percentiles, with errors distributed according to an asymmetric Laplace mixture under a normalized generalized gamma (NGG) process on the scale parameters. A Spike and Slab methodology will be employed for model selection, enabling the analysis of relevant regressors for the quantiles in the tails of the distribution.
The goal of this seminar is to present the fundamental tools for formulating questions related to specific geometric properties of K3 surfaces (such as automorphisms, elliptic fibrations, and Shioda-Inose structures). Furthermore, it aims to explore the analogues of these tools in the context of higher-dimensional varieties, known as irreducible holomorphic symplectic (IHS) or hyperkähler manifolds.
1. Hirzebruch-Jung continued fractions
1.1. Basics
1.2. Wahl chains
1.3. Zero continued fractions
2. Singular and nonsingular algebraic surfaces
2.1. Generalities on surfaces and singularities
2.2. Cyclic quotient singularities
2.3. T-singularities
3. Deformations
3.1. General basic theory for affine and proper varieties
3.2. Q-Gorenstein deformations
3.3. Kollár–Shepherd-Barron correspondence
4. W-surfaces
4.1. Picard group, class group, and topology
4.2. MMP for W-surfaces I
4.3. MMP for W-surfaces II
5. N-resolutions
5.1. Existence and uniqueness
5.2. Braid group action
6. Exceptional collections of Hacking bundles
6.1. Hacking exceptional bundles
6.2. Hacking exceptional collections
6.3. Exceptional collections and H.e.c.s
Wednesday, 5 March 2025, 17:15-19:00 (UTC+1)
A HIGH-TEMPERATURE PHASE TRANSITION FROM NUMBER THEORY
Let S be the semidirect product of the multiplicative positive integers acting on the integers, with the operation (a,m)(b,n) = (ab,bm+n), where a and b are positive. In previous joint work with Astrid an Huef and Iain Raeburn, we studied the Toeplitz C*-algebra generated by the left regular representation of S on l^2(S), and showed that the extremal KMS equilibrium states with respect to the natural dynamics, for inverse temperatures above the critical value 1, are parametrized by the point masses on the unit circle. I will talk about what happens for inverse temperatures between 0 and 1. Surprisingly, the system has an unprecedented high-temperature phase transition with extremal KMS states parametrized by averages of point masses at roots of unity of the same primitive order together with Lebesgue measure. The quotients associated to these extremal states embed in the Bost-Connes algebra, and establish a link to the Bost-Connes phase transition with spontaneous symmetry breaking. This is current joint work with Tyler Schulz.
https://uw-edu-pl.zoom.us/j/95105055663?pwd=TTIvVkxmMndhaHpqMFUrdm8xbzlHdz09
Meeting ID: 951 0505 5663
Passcode: 924338
The Organizers:
Paul F. Baum, Francesco D'Andrea, Ludwik Dąbrowski, Søren Eilers, Piotr M. Hajac, Frédéric Latrémolière, Tomasz Maszczyk, Ryszard Nest, Marc A. Rieffel, Andrzej Sitarz, Wojciech Szymański, Adam Wegert