Matemáticas X Deep Learning

The goal of this student seminar is to learn together about research topics in Deep Learning with mathematical components.

Student seminar format:
30 min paper presentation
+
1 h discussion / brainstorming.
(Bringing your laptop helps!)

To propose topics or to express interest, write to mpetrache@uc.cl.
Telegram group: https://t.me/+eybUtFDyPxVkOTc5.

2025-11-21
9:50-11:20 hrs.
Bastian Rieck. University of Fribourg
An introduction to topological autoencoders
Google Meet https://meet.google.com/zcm-kzei-nvh
Abstract:
Topological autoencoders are a class of representation-learning models designed to preserve the intrinsic topological structure of data while performing dimensionality reduction. Unlike classical autoencoders, which optimize only for reconstruction accuracy in the ambient space, topological autoencoders incorporate objectives derived from computational topology—typically persistent homology—to ensure that salient global features such as connected components, cycles, and higher-order holes are retained in the latent representation. 

This talk gives an introduction to topological autoencoders; it is an invited lecture for this semester's Topological Data Analysis course, IIC3686.
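
For concreteness during the discussion, here is a minimal sketch of a topological regularizer in the spirit of topological autoencoders. It is an illustration, not the method's reference implementation: it uses the fact that the 0-dimensional persistence pairings of a point cloud coincide with the edges of a minimum spanning tree of its distance matrix, and it assumes PyTorch and SciPy (the helper names are hypothetical).

import torch
from scipy.sparse.csgraph import minimum_spanning_tree

def pairwise_distances(x):
    # Euclidean distance matrix of an (n, d) point cloud.
    return torch.cdist(x, x)

def mst_edges(dist):
    # Edge indices of a minimum spanning tree of the distance matrix;
    # these coincide with the 0-dim persistence pairings of the cloud.
    mst = minimum_spanning_tree(dist.detach().cpu().numpy())
    rows, cols = mst.nonzero()
    return torch.as_tensor(rows), torch.as_tensor(cols)

def topological_loss(x, z):
    # Penalize the mismatch between topologically selected distances
    # in the input space (x) and in the latent space (z = encoder(x)).
    dx, dz = pairwise_distances(x), pairwise_distances(z)
    ix, iz = mst_edges(dx), mst_edges(dz)
    return ((dx[ix] - dz[ix]) ** 2).sum() + ((dz[iz] - dx[iz]) ** 2).sum()

# Usage: total = reconstruction_loss + lam * topological_loss(x, encoder(x))
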
2025-11-07
9:40-11:00 hrs.
Pawel Dlotko. Dioscuri Center for Topological Data Analysis (Poland)
Topology in Action: Understanding Complex Data Through Shape
Virtual, via Google Meet: meet.google.com/pbn-wjks-pwr
Abstract:
In this lecture, I will demonstrate how topological methods, both classical and modern, help us understand the structure and geometry of complex datasets and phenomena. We will begin with the fundamental tools of persistent homology and explore their practical applications in data analysis and scientific modeling. Then we will move to newer approaches based on Euler characteristic descriptors, concluding with the Mapper algorithm and its power in visualizing high-dimensional data. Throughout the lecture, I will present a series of practical examples showing how topological data analysis (TDA) interacts with statistics and machine learning, offering new ways to extract meaning, compare structures, and visualize relationships hidden within data.
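
For attendees who want to experiment beforehand, here is a small example of the kind of computation underlying persistent homology, using the gudhi library (a sketch for orientation; the talk is not tied to this library):

import numpy as np
import gudhi

# Sample a noisy circle: its one "hole" should appear as a long-lived
# 1-dimensional feature in the persistence diagram.
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 100)
points = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.normal(size=(100, 2))

rips = gudhi.RipsComplex(points=points, max_edge_length=2.0)
simplex_tree = rips.create_simplex_tree(max_dimension=2)
diagram = simplex_tree.persistence()  # list of (dim, (birth, death)) pairs

# Print only the long-lived (topologically significant) features.
for dim, (birth, death) in diagram:
    if death - birth > 0.5:
        print(f"H{dim}: birth={birth:.2f}, death={death:.2f}")
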
2025-10-28
11:00 hrs.
Benjamin Blum-Smith. Johns Hopkins University
Invariant theory and data science
Multipurpose room, 1st floor, Villanueva building
Abstract:
This talk discusses contact points between the algebraic theory of invariants, and data science applications such as signal processing and machine learning. Invariant theory becomes relevant to data problems when they have built-in symmetry. For example:
 
(i) Machine learning on graphs has inherent symmetry coming from the fact that graphs are represented as adjacency matrices, but node relabeling changes the adjacency matrix without changing the underlying graph.
 
(ii) Signal processing problems related to molecule imaging have inherent symmetry coming from the fact that one often has little or no control over the orientations of the molecules to be imaged.
 
When an application exhibits symmetry, classes of functions relevant to the problem are typically invariant or equivariant with respect to some group action. Invariant theory seeks to describe such classes. We discuss applications of invariant theory to such problems, and also some new questions in invariant theory motivated by these applications. The talk represents joint work with many collaborators, but especially Soledad Villar.
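
A small numerical illustration of point (i): relabeling a graph's nodes conjugates the adjacency matrix by a permutation matrix, so functions built from traces of matrix powers (closed-walk counts) are invariants of the underlying graph. This is a toy sketch for orientation, not code from the talk.

import numpy as np

rng = np.random.default_rng(1)
n = 6
A = rng.integers(0, 2, size=(n, n))
A = np.triu(A, 1)
A = A + A.T                                   # adjacency matrix of a random simple graph

P = np.eye(n, dtype=int)[rng.permutation(n)]  # random permutation matrix
B = P @ A @ P.T                               # same graph with relabeled nodes

def walk_counts(M, kmax=4):
    # trace(M^k) counts closed walks of length k, a relabeling invariant.
    return [int(np.trace(np.linalg.matrix_power(M, k))) for k in range(1, kmax + 1)]

print(walk_counts(A) == walk_counts(B))       # True for every permutation P
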
-----------------------------
Keywords: invariants; point clouds; Galois theory; generic separation; equivariant learning; degree bounds
2024-10-09
13:30 - 14:50 hrs.
Juan Jose Molina. UC Chile
Understanding the Dynamics of the Frequency Bias in Neural Networks - https://arxiv.org/abs/2405.14957
Room 5, Faculty of Mathematics
2024-10-02
13:30 - 14:50 hrs (new time!).
Billy Peralta. UNAB
Expert Gate: Lifelong Learning with a Network of Experts - https://arxiv.org/pdf/1611.06194
Room 5, Faculty of Mathematics
2024-09-25
14:00 - 15:30 hrs.
Rafael Elberg. UC Chile
Neural Redshift -- Which simplicity biases do neural networks have, and how can we model them?
Room 5, UC Faculty of Mathematics
Abstract:
https://arxiv.org/pdf/2403.02241
2024-08-28
14:00-15:30 hrs.
Pablo Herrera. UC Chile
PINNs: solving equations with neural networks
Room 5, Department of Mathematics
Abstract:
https://arxiv.org/pdf/1711.10561
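
Since the abstract above is just the reference, here is a minimal sketch of the core PINN idea for orientation (illustrative code, not from the paper): fit a network u_theta to the equation u''(x) = -sin(x) on (0, pi) with zero boundary values, by penalizing the residual at collocation points via automatic differentiation.

import torch
from torch import nn

# u: a small network approximating the solution u(x).
u = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(u.parameters(), lr=1e-3)

x = (torch.rand(128, 1) * torch.pi).requires_grad_(True)  # interior collocation points
xb = torch.tensor([[0.0], [torch.pi]])                    # boundary points

for step in range(2000):
    opt.zero_grad()
    ux = torch.autograd.grad(u(x).sum(), x, create_graph=True)[0]
    uxx = torch.autograd.grad(ux.sum(), x, create_graph=True)[0]
    residual = uxx + torch.sin(x)                         # residual of u'' = -sin(x)
    loss = (residual ** 2).mean() + (u(xb) ** 2).mean()   # PDE residual + boundary term
    loss.backward()
    opt.step()

# The exact solution is u(x) = sin(x), so u(pi/2) should approach 1.
print(u(torch.tensor([[torch.pi / 2]])).item())
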
2024-08-21
14:00 - 15:30 hrs.
Mircea Petrache. UC Chile
Attention modules in transformers vs. "protein language models" such as AlphaFold
Room 5, Department of Mathematics
Abstract:
We will compare the attention modules in the Transformer architecture, in Multiple Sequence Alignment, and in AlphaFold's "Evoformer" module.
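
As a common baseline for the comparison, here is a minimal sketch of standard scaled dot-product attention, the generic Transformer version (the Evoformer variants add row/column structure and pair biases on top of this):

import torch
import torch.nn.functional as F

def attention(q, k, v):
    # q, k, v: (batch, seq_len, d). Each output position is a convex
    # combination of the values, weighted by query-key similarity.
    d = q.shape[-1]
    scores = q @ k.transpose(-2, -1) / d ** 0.5   # (batch, seq, seq)
    weights = F.softmax(scores, dim=-1)
    return weights @ v

x = torch.randn(2, 10, 16)
out = attention(x, x, x)   # self-attention: queries = keys = values
print(out.shape)           # torch.Size([2, 10, 16])
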
2024-08-14
14:00 - 15:30 hrs.
Felipe Engelberger. Universität Leipzig - Meiler Lab
AlphaFold - an introduction to the neural network that revolutionized biochemistry
Room 5, Department of Mathematics
Abstract:
https://elanapearl.github.io/blog/2024/the-illustrated-alphafold/
2024-06-12
14:50 hrs.
Hugo Carrillo. UC
Some error bounds from the Physics-Informed Neural Networks (PINNs) literature.
Room 2
Abstract:

I will review some generalization error bounds for several PINN variants, and for some types of PDEs to which these variants adapt well.
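
For orientation, many bounds in this literature share the following schematic shape (a sketch in the style of Mishra-Molinaro type estimates, stated here as an assumption rather than as the talk's exact results):

% u_theta: PINN approximant, u^*: exact solution of the PDE,
% E_T: training (residual) error, N: number of collocation points.
\[
  \| u_\theta - u^* \| \;\le\; C_{\mathrm{stab}} \Big( \mathcal{E}_T(\theta) + C_{\mathrm{quad}}\, N^{-\alpha} \Big),
\]
% C_stab comes from a stability estimate for the PDE, and the N^{-alpha}
% term bounds the quadrature error of the collocation points, so a small
% residual at sufficiently many points forces a small solution error.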

2024-06-05
14:50-16:20 hrs (30 min presentation + 1 h discussion).
Mircea Petrache. UC Chile
Flow Matching for Generative Modeling -- https://arxiv.org/abs/2210.02747
Room 2
2024-05-29
14:50 - 16:20 hrs (30 min presentation + 1 h discussion).
Juan Jose Molina. UC Chile
Infinite Limits of Neural Networks - https://kempnerinstitute.harvard.edu/research/deeper-learning/infinite-limits-of-neural-networks/
Room 2
2024-05-15
17:10 hrs.
Pablo Flores. UC Chile
Physics-Informed Neural Networks -- https://faculty.sites.iastate.edu/hliu/files/inline-files/pinn_rpk_2019_1.pdf
Room 1
2024-05-08
17:00-18:30 hrs (30 min presentation + 1 h discussion).
German Pizarro. CENIA
A New Approach for Self-Supervised Learning on Images -- https://arxiv.org/abs/2301.08243
Room 1, Department of Mathematics
2024-04-24
17:00-18:30 hrs (30 min presentation + 1 h discussion).
Sebastian Sanchez. UC Chile
Fourier Neural Operator for Parametric Partial Differential Equations -- https://arxiv.org/pdf/2010.08895.pdf
Room 1
2024-04-17
17:00-18:30 hrs (30 min presentation + 1 h discussion).
Rafael Elberg. UC Chile
CLIP: how to connect images and text intelligently -- Base paper: https://arxiv.org/abs/2103.00020
Room 1
2024-04-10
17:00-18:30 hrs (30 min presentation + 1 h discussion).
Nicolas Alvarado. UC Chile
Training Hyperbolic Neural Networks -- https://www.math.uci.edu/~jxin/hffnn_gd_camc_oct_2023.pdf
Room 1
2024-04-03
17:00-18:30 hrs (30 min presentation, 1 h discussion).
Felipe Urrutia. Universidad de Chile
Using Information Theory in Natural Language Processing -- https://tinyurl.com/23Vhc3Z9 -- https://arxiv.org/pdf/2308.12562.pdf
Room 1
2024-03-20
17:00 - 18:00 hrs.
Mircea Petrache. UC Chile
Initial review: neural networks, training, basic architectures (CNN, GNN, RNN, Transformers, Diffusion)
Room 1
Abstract:
We recall / introduce neural networks and their architectures.

A session especially useful for those who have not seen neural networks before.

The presentation will be very quick and does not replace individual study; more introductory materials are listed below, followed by a small training-loop sketch.

Material:

Notes covering training, CNNs, and RNNs: https://arxiv.org/pdf/2304.05133.pdf

Somewhat more advanced/mathematical notes, without emphasis on particular architectures: https://arxiv.org/abs/2105.04026

Notes on GNNs (graph neural networks): http://web.stanford.edu/class/cs224w/slides/04-GNN2.pdf

Transformers (visual intro): https://jalammar.github.io/illustrated-transformer/

Diffusion (introductory/survey paper): https://arxiv.org/pdf/2306.04542.pdf
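
As announced above, a minimal training-loop sketch (toy data and model, hypothetical, plain PyTorch) showing the forward pass / loss / backpropagation / update cycle that the session reviews:

import torch
from torch import nn

# Toy regression data: y = sin(x) plus noise.
x = torch.linspace(-3, 3, 256).unsqueeze(1)
y = torch.sin(x) + 0.1 * torch.randn_like(x)

model = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

for step in range(500):
    opt.zero_grad()              # reset accumulated gradients
    loss = loss_fn(model(x), y)  # forward pass and loss
    loss.backward()              # backpropagation
    opt.step()                   # parameter update

print(f"final loss: {loss.item():.4f}")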

2024-03-14
17:30-18:30 hrs.
Mircea Petrache. UC Chile
We set the format and propose topics to cover during the semester
Room 2
Abstract:
We will consider topics that mix mathematical tools with deep learning applications/ideas.

A first idea is that
-- each session has a 30 min presentation of a paper or topic,
-- followed by up to 1 hour of discussion, where we also cover complementary material and go through the details of the paper.
-- For each session it helps if at least 2-3 people have read the paper and know it well.

On this first occasion, we will discuss the following:
-- whether the above is a good format,
-- which topics we want to cover (i.e., for which topics we should look for papers to present),
-- and we introduce ourselves among those interested in mathematics + deep learning.