Deciding how to allocate the seats of a house of representatives is one of the most fundamental problems in the political organization of societies, and it has been studied for over two centuries. The idea of proportionality is at the core of most approaches to this problem, and it is captured by the divisor methods, such as the Jefferson/D'Hondt method. In a seminal work, Balinski and Demange extended the single-dimensional idea of divisor methods to the setting in which the seat allocation is simultaneously determined by two dimensions, and proposed the so-called biproportional apportionment method. The method, currently used in several electoral systems, is however limited to two dimensions, and extending it is considered an important problem both in theory and in practice.

In this work we initiate the study of multidimensional proportional apportionment. We first formalize a notion of multidimensional proportionality that naturally extends that of Balinski and Demange. By analyzing an appropriate integer linear program, we prove that, in contrast to the two-dimensional case, the existence of multidimensional proportional apportionments is not guaranteed, and deciding their existence is NP-complete. Interestingly, our main result asserts that it is possible to find approximate multidimensional proportional apportionments that deviate from the marginals by a small amount. The proof arises through the lens of discrepancy theory, mainly inspired by the celebrated Beck-Fiala Theorem.

We evaluate various methods based on 3-dimensional proportionality, using data from the recent 2021 Chilean Constitutional Convention election. Besides the classical political and geographical dimensions, this election required the convention to be balanced in gender. The methods we consider are 3-dimensional in spirit but include further characteristics such as plurality constraints and/or minimum quotas for representation.

This is joint work with Javier Cembrano, Gonzalo Diaz, and Victor Verdugo. Preliminary versions appeared at EC 2021 and EAAMO 2021.
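As a rough illustration only (the notation here is an assumption of this sketch, not fixed by the talk), the natural three-dimensional analogue of the divisor-method condition asks for an integer tensor x whose entries are roundings of the vote weights v scaled by per-coordinate divisors, while matching prescribed marginals r, c, g along each dimension:

\[
x_{ijk} \in \left\{ \left\lfloor \frac{v_{ijk}}{\lambda_i \mu_j \nu_k} \right\rfloor, \left\lceil \frac{v_{ijk}}{\lambda_i \mu_j \nu_k} \right\rceil \right\},
\qquad
\sum_{j,k} x_{ijk} = r_i, \quad \sum_{i,k} x_{ijk} = c_j, \quad \sum_{i,j} x_{ijk} = g_k,
\]

where \(\lambda, \mu, \nu\) are the divisors. The existence question discussed above asks, roughly, whether a tensor of this type exists at all.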
The hybridizable discontinuous Galerkin (HDG) methods were introduced in the framework of second-order diffusion problems by means of hybridization and static condensation. We show that the exact solution can be characterized as the solution of local Dirichlet problems (hybridization), which are then patched together by the transmission conditions (static condensation). Our goal is to show that the HDG methods are nothing but a discrete version of this characterization. To do so, we show that this is also the case for the well-known continuous Galerkin and mixed methods. We end by sketching how to define HDG methods for general PDEs.
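For readers unfamiliar with the construction, a standard HDG formulation of the model diffusion problem \(-\Delta u = f\) can be sketched as follows (the stabilization parameter \(\tau\) and the particular spaces are assumptions of this sketch, not details fixed by the talk). On each element K, given the trace unknown \(\hat{u}_h\) on \(\partial K\), the local problem seeks \((q_h, u_h)\) such that

\[
(q_h, v)_K - (u_h, \nabla\cdot v)_K + \langle \hat{u}_h, v\cdot n\rangle_{\partial K} = 0,
\qquad
-(q_h, \nabla w)_K + \langle \hat{q}_h\cdot n, w\rangle_{\partial K} = (f, w)_K,
\]

with numerical flux \(\hat{q}_h\cdot n = q_h\cdot n + \tau (u_h - \hat{u}_h)\). The trace \(\hat{u}_h\) is then determined globally by the transmission condition \(\sum_K \langle \hat{q}_h\cdot n, \mu\rangle_{\partial K\setminus\partial\Omega} = 0\) for all test traces \(\mu\), which is the discrete analogue of patching the local Dirichlet solutions together.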
For most wave-based inverse problems, the resolution of the reconstruction is limited by the so-called diffraction limit; that is, the smallest features that can be reconstructed are no smaller than the smallest wavelength in the available data. If one properly restricts the class of features to, for example, point scatterers, the seminal work of Donoho in the early ’90s demonstrates that recovering such sub-wavelength features is tractable. However, designing algorithms that recover a more general class of structured scatterers with features below the diffraction limit, in the presence of noise, remains an open question.
In this talk, we aim to surpass the diffraction limit using deep learning techniques coupled with tools from computational harmonic analysis. In particular, I will introduce a new neural network architecture for inverting wide-band data to recover acoustic scatterers at resolutions finer than the classical limit. The architecture incorporates insights from the butterfly factorization and the Cooley-Tukey algorithm to explicitly account for the physics of wave propagation. The dimensions of the network seamlessly adapt to the desired image resolution, resulting in a number of trainable weights that scales quasilinearly with the image resolution and the data bandwidth. In addition, the data is optimally assimilated across frequencies, thus enhancing the stability of the training stage. I will provide the rationale for this construction and showcase its properties for several classes of scatterers with sub-Nyquist features embedded in a known background medium.
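As a point of reference only (the code below is an illustrative sketch in Python, not part of the proposed architecture), the classical radix-2 Cooley-Tukey recursion exhibits the butterfly structure that the network's factorized layers generalize with learned weights; its O(n log n) combine step is the source of the quasilinear scaling alluded to above.

import numpy as np

def fft_radix2(x):
    # Recursive radix-2 Cooley-Tukey FFT. The even/odd split and the
    # "butterfly" combination below are the factorization structure that
    # butterfly-style networks mimic with learned weights; here the twiddle
    # factors are fixed by the DFT, so this is only an analogy.
    x = np.asarray(x, dtype=complex)
    n = x.shape[0]
    if n == 1:
        return x
    assert n % 2 == 0, "length must be a power of two"
    even = fft_radix2(x[0::2])   # half-size transform of even-indexed samples
    odd = fft_radix2(x[1::2])    # half-size transform of odd-indexed samples
    twiddle = np.exp(-2j * np.pi * np.arange(n // 2) / n)
    # Butterfly step: combine the two half-size transforms in O(n) work,
    # giving O(n log n) overall.
    return np.concatenate([even + twiddle * odd, even - twiddle * odd])

# Sanity check against NumPy's reference FFT.
x = np.random.randn(8)
assert np.allclose(fft_radix2(x), np.fft.fft(x))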
(Joint work with Matthew Li and Laurent Demanet).
The aim of this talk is to lay general groundwork for a conversation about mathematics education around three families of problems: justification, possibility, and implementation. To make this concrete, I draw on my doctoral thesis, anchored in the Chilean context of probability and statistics at the secondary-school level and oriented toward the elusive notion of critical citizenship.
Investigating justification means asking why a given content should be taught at a certain educational level. In particular, traces of the so-called critical-competence argument are evident in the Chilean probability and statistics curriculum.
Studying possibility means asking fundamental questions about what mathematical knowledge is and how it manifests itself. In general terms, I draw on Skovsmose's critical epistemology and how it connects with theoretical frameworks of probabilistic and statistical literacy.
Implementation problems seek to answer the how of mathematics education. In my thesis, I outline three general principles for the design of learning environments: exemplarity, an inquiry-based approach, and pragmatism.
Finally, by way of recapitulation and comparison, I comment on my current research project on the interconnections between mathematical competencies and computational thinking.