
Generative Modeling: Diffusion vs Normalizing Flows

Diffusion has come to dominate the field of audio-visual media generation, but Normalizing Flows have their own strengths. In this article, I explore the differences between these models, demonstrate those differences through a small experiment, discuss some promising recent developments in the use of flows for media generation tasks, and explore an interesting bridge that connects the two approaches.

February 15, 2026 · 12 min

Neural ODEs

Neural ODEs are a relatively niche deep learning architecture designed to represent continuous-time differential processes. In this post, I provide an introduction to the basics of Neural ODEs and two simple applications to demonstrate their use.

February 3, 2026 · 11 min

A Physics View of Function Optimization

An exploration of how the common problem of function optimization over $\mathbb{R}^d$ can be viewed through the lens of physics, and in particular, Hamiltonian mechanics. This perspective is taken by many modern papers on optimization algorithms, and I attempt to give a brief and accessible introduction to it here.

January 21, 2025 · 6 min

A Summary of "End-to-End Differentiable Proving"

Here, I summarize and try to explain in detail what I learned from the paper entitled "End-to-End Differentiable Proving" by Rocktäschel and Riedel.

November 30, 2024 · 11 min

A Summary of "Reasoning With Neural Tensor Networks for Knowledge Base Completion"

Here, I summarize and try to explain in detail what I read and understood in the paper entitled "Reasoning With Neural Tensor Networks for Knowledge Base Completion" by Socher, Chen, Manning, and Ng from Stanford.

November 26, 2024 · 9 min

A Summary of PUTNAMBENCH

A summary of the PUTNAMBENCH paper by Tsoukalas et al. In this paper, the authors formalized hundreds of problems from the William Lowell Putnam Mathematical Competition in order to test the capabilities of modern neural models in proving theorems within frameworks such as Lean 4, Isabelle, and Coq. These frameworks can automatically and rigorously verify the correctness of the proofs provided by the neural models.

November 24, 2024 · 4 min