Hi! I'm Jordan Hoffmann, a PhD student in Applied Mathematics at Harvard University.

The unpredictable and the predetermined unfold together to make everything the way it is.
Tom Stoppard, Arcadia

Research Projects

I have worked on a variety of research projects. Collaborators include: Seth Donoughe, Chris Rycroft (my advisor), and Yohai Bar-Sinai. A few things I have worked on are detailed below.

Here is my GitHub and here is my Google Scholar.

Machine Learning with Crumpled Sheets

With Yohai Bar-Sinai, Shmuel Rubinstein, and Chris Rycroft, I deployed tools from machine learning to try to understand the geometry and predictability of crease patterns in crumpled sheets.

Hoffmann J*, Bar-Sinai Y*, Lee L, Andrejevic J, Mishra S, Rubinstein SM, and Rycroft CH. (2019). Machine learning in a data-limited regime: Augmenting experiments with synthetic data uncovers order in crumpled sheets. Science Advances. *equal contributors. Science Advances (Open Access) Link. Link to code. Click the code link to read a description of the project. Note: for all projects, a more extended description exists on GitHub. :)

Below is a video that briefly highlights what we did in the paper. The video covers things at a high level, but I think it should be pretty informative. The video was made by myself, Shruti, Yohai, and Lisa.

Dragonfly Wing

With Seth Donoughe and Chris Rycroft, I looked at how the complex patterns in dragonfly wings form. For more information see this Twitter thread, this Harvard SEAS article, or this ScienceNews article. Here is a link to the paper. Link to code.

Hoffmann J*, Donoughe S*, Li K, Salcedo M, and Rycroft CH. (2018). A simple developmental model recapitulates complex insect wing venation patterns. PNAS. *equal contributors PNAS Link

Salcedo M*, Hoffmann J*, Donoughe S, and Mahadevan L. (2018). Size, shape and structure of insect wings. bioRxiv. *equal contributors Preprint Link

At APS 2019 I gave a talk on this. A PDF copy of my slides is here.

Protein Folding

During my undergraduate degree, I worked with Drs. Wrabl and Hilser on protein folding. The work led to the publication below:

Hoffmann J*, Wrabl JO*, Hilser VJ. (2016). The role of negative selection in protein evolution revealed through the energetics of the native state ensemble. Proteins: Structure, Function, and Bioinformatics. *equal contributors Link

Other Research/Projects

I have worked on a variety of other projects. These include projects in morphogenesis, machine learning, computational geometry, and astrophysics. Many years ago, I worked on density functional theory (DFT) in fluids. For completed projects, citations are below.

A lot of my thesis is on still-unpublished work in developmental biology. For this project I worked with large (5-10 TB) microscopy datasets, from which I extracted biological information and built an in silico model of development. See the teaser below.

Development Gif

Pueyo L, Soummer R, Hoffmann J, et al. (2015). Reconnaissance of the HR 8799 Exosolar System. II. Astrometry and orbital motion. The Astrophysical Journal. arXiv Link

Hoffmann J, Gillespie D. (2013). Ion correlations in nanofluidic channels: Effects of ion size, valence, and concentration on voltage- and pressure-driven currents. Langmuir. Link

I enjoy playing chess quite a bit. For a final project in Advanced Machine Learning (CS 281) I wrote a Chess Attack engine, and then modified the minimax tree into an ML-based approach. See the project poster here. I typically play chess on Lichess, and I wrote a script that computes the win distance between two players. See the code here. I have beaten a few people who have beaten Magnus, though when I played him I got crushed. Interestingly, in the age of online chess, even very inexperienced players who have played only a dozen online games are typically within 5 degrees of the World Champion.
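The idea behind the win-distance script is a breadth-first search over a "who beat whom" graph. Below is a minimal sketch, assuming the games have already been downloaded and collapsed into a mapping from each username to the set of opponents they have beaten; the `graph` structure and `win_distance` name here are illustrative, not the actual script's API.

```python
from collections import deque

def win_distance(graph, src, dst):
    """Length of the shortest chain of wins linking src to dst.

    graph: dict mapping a username to the set of players they have
    beaten (built beforehand, e.g. from downloaded game exports).
    Returns None if no chain of wins exists.
    """
    if src == dst:
        return 0
    seen = {src}
    queue = deque([(src, 0)])
    while queue:
        player, dist = queue.popleft()
        for beaten in graph.get(player, ()):
            if beaten == dst:
                return dist + 1
            if beaten not in seen:
                seen.add(beaten)
                queue.append((beaten, dist + 1))
    return None  # dst is unreachable through wins alone
```

Because wins are directed edges, the distance is asymmetric: you can be two wins away from someone who is infinitely far from you.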

I recently wondered after how many moves a Chess960 game becomes indistinguishable from a standard game. To try to answer this, I downloaded 20,000 games from the Lichess database (10,000 standard, 10,000 Chess960) and trained a classifier to distinguish, given a board representation, whether it came from a standard or Chess960 game. To my surprise, the network reached ~75% accuracy overall; as expected, it was very accurate at the start of a game and approached random at the end (note: standard games tended to be longer). See the following Twitter thread for more information:
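To give a flavor of the "board representation" such a classifier consumes, here is a minimal sketch that encodes the board field of a FEN string as a length-64 integer vector. This encoding is illustrative only; the actual features and network used in the experiment may have differed.

```python
# Map each piece letter to a small integer; 0 is reserved for empty squares.
# Uppercase is White, lowercase is Black, following FEN convention.
PIECES = {p: i + 1 for i, p in enumerate("PNBRQKpnbrqk")}

def encode_board(fen):
    """Encode the board field of a FEN string as a flat length-64
    integer vector, suitable as input to a simple classifier."""
    board = fen.split()[0]  # keep only the piece-placement field
    vec = []
    for ch in board:
        if ch == "/":
            continue                      # rank separator
        elif ch.isdigit():
            vec.extend([0] * int(ch))     # run of empty squares
        else:
            vec.append(PIECES[ch])
    return vec
```

Feeding vectors like this (one per position, labeled standard or 960) to any off-the-shelf classifier reproduces the shape of the experiment: early positions retain the telltale back-rank arrangement, while late positions look alike.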

I went to HackMIT in 2013 and wrote the app RainMann with my friend Matt. The app won “Best Financial Data Hack” and “Best Use of the Bloomberg API” awards from Kensho and Bloomberg, respectively. RainMann Link

Here is the material from a talk I gave at a Machine Learning group meeting at Harvard. The topic was computer game play. I discussed minimax trees, alpha-beta pruning, TD-Gammon, Monte Carlo tree search, and convolutional networks.
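The minimax-with-alpha-beta core from that talk fits in a few lines. The `Node` class below is a toy stand-in for a real game state (which would have move generation, move ordering, and usually a transposition table); the sketch only shows the pruning logic itself.

```python
class Node:
    """Toy game-tree node: leaves carry a static evaluation."""
    def __init__(self, value=None, children=()):
        self.value = value
        self.kids = list(children)

    def is_terminal(self):
        return not self.kids

def alphabeta(node, alpha, beta, maximizing):
    """Minimax value of `node` with alpha-beta pruning."""
    if node.is_terminal():
        return node.value
    if maximizing:
        best = float("-inf")
        for child in node.kids:
            best = max(best, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, best)
            if alpha >= beta:   # opponent already has a better option: prune
                break
        return best
    best = float("inf")
    for child in node.kids:
        best = min(best, alphabeta(child, alpha, beta, True))
        beta = min(beta, best)
        if beta <= alpha:       # prune symmetric case
            break
    return best

# Root maximizes over two min-nodes with leaves (3, 5) and (2, 9):
# min(3, 5) = 3 and min(2, 9) = 2, so the root value is 3.
tree = Node(children=[
    Node(children=[Node(3), Node(5)]),
    Node(children=[Node(2), Node(9)]),
])
```

The pruning never changes the value returned at the root; it only skips subtrees that provably cannot influence it, which is what makes deep search tractable.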

Here is the material from my CS 205 final project at Harvard with Hallvard Moian Nydal. We used Theano (back in 2014/2015) to implement a sports video classifier. We restricted ourselves to 10 video classes because we were running on a desktop workstation.

Here is a flyer for Computed Futures, a musical performance I am involved in.


I have a few GitHub repositories which are potentially interesting. I detail a few of them below.

Machine Learning + Flat Folding This repository contains C++ code that simulates flat-folding a sheet at random. Additionally, it has code to perturb Maekawa's and Kawasaki's rules.

Image Segmentation This repository contains Python code that segments images using a level-set based method.

POV-Ray Rendering Code I have been asked a few times for the code I use to render images. Uploaded is a Mathematica script that I use to render convex hulls of objects. Feel free to contact me with any questions regarding the code's use. Additionally, this repository will contain a model of insect development.

Image Wing Analysis This repository analyzes a range of geometric features of shapes in insect wing images.

Lichess Distance Calculator This repository computes the win-distance between two Lichess usernames. Given Magnus' recent online activity, it is pretty cool that the average distance to Magnus is ~4.5.

TensorFlow Demo For the Harvard Machine Learning Supergroup, I put together a TensorFlow demo with Shruti Mishra.

About Me

I am a 5th year PhD student at Harvard. I am interested in many things. Academically, I am very interested in machine learning, high performance computing, data visualization, and geometry. I am also very interested in animals, puzzles, art, and chess.

A slightly more conventional CV can be found here.

During my PhD, I was funded by a Computational Science Graduate Fellowship. You can also read a piece about me here.

My favorite animal: manatee.

My favorite comic: The Far Side.

My favorite TV Show: Stranger Things.

My Pottermore Patronus: A Newfoundland.

My favorite Children's Book: The Eleventh Hour.

Dogs > Cats

Mathematica > MATLAB

vim > emacs

Someday I will write a Conjugate Gradient Preconditioner called L'Oreal, for silky-smooth solutions. Stay tuned!

I wrote a chess engine that I can beat. I wrote a Connect-4 engine that I cannot.


Videos and Images

Here are a few videos and images I have made that are either useful or that I like.

Here is a video showing the morphing process used in the paper I wrote with Seth Donoughe.

I modified a segmentation code I wrote to include multiple centroids, which allowed us to track more complicated objects, for example an experiment where disks of slightly different radii are compressed. Here is a video in which I colored each opening by a phase, adding pi/2 based on its index. Data from the Bertoldi group at Harvard.

More files can be found here: size, elongation, and ID; the same overlaid on the original shape; phase; and phase overlaid on the original shape.

Here are some results from a fun project I did, inspired by Karl Sims' 1991 SIGGRAPH paper.

Here is some computer-generated art I made using a neural network to map an (x, y) pixel coordinate to an (r, g, b) value for each pixel in an 84 by 84 image. For the first video, I trained a network that takes in an (x, y) pixel coordinate and outputs an (r, g, b) value. Once the network is trained, I can evaluate it on arbitrary coordinates, including those outside the training region. In the second video, I do something similar, but increase the complexity of the neural network.
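The evaluation machinery can be sketched as follows. For brevity I skip the training step and use a randomly initialized MLP (the layer sizes, depth, and tanh/sigmoid choices are illustrative, not the exact network used); evaluating the network at every normalized pixel coordinate works the same way whether the weights are trained or random.

```python
import numpy as np

def render(size=84, hidden=32, depth=4, seed=0):
    """Evaluate a small random MLP at every pixel coordinate,
    producing a (size, size, 3) image with values in (0, 1)."""
    rng = np.random.default_rng(seed)
    # Normalize pixel coordinates to [-1, 1] in both x and y.
    xs, ys = np.meshgrid(np.linspace(-1, 1, size), np.linspace(-1, 1, size))
    h = np.stack([xs.ravel(), ys.ravel()], axis=1)  # (size*size, 2)
    # A few tanh layers with random weights; training would fit these
    # weights to a target image instead of sampling them.
    for _ in range(depth):
        w = rng.standard_normal((h.shape[1], hidden))
        h = np.tanh(h @ w)
    w_out = rng.standard_normal((hidden, 3))
    rgb = 1.0 / (1.0 + np.exp(-(h @ w_out)))  # sigmoid squashes to (0, 1)
    return rgb.reshape(size, size, 3)

img = render()
```

Because the network is a smooth function of (x, y), you can evaluate it on a finer grid, or on coordinates outside [-1, 1], and get a coherent (extrapolated) image for free.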

I enjoy reading The Riddler and occasionally solve some of the problems. Here are a few of the solutions I made that I like. Note: some are posted under a secret Twitter handle I have used.

Tethering a Goat

Eating Cake

Convex Polygons

Eating Fish


You can email me at "echo ude.dravrah.g@nnamffohj|rev".