FluxML Projects - Summer of Code

Flux usually takes part in Google Summer of Code as a NumFOCUS organization. We follow the same rules and application guidelines as Julia, so please check the Julia Summer of Code page for more information on applying. Below are a set of ideas for potential projects (though you are welcome to explore anything you are interested in).

Flux projects are typically very competitive; we encourage you to get started early, as successful contributors typically have early PRs or working prototypes as part of the application. It is a good idea to simply start contributing via issue discussion and PRs and let a project grow from there; you can take a look at this list of issues for some starter contributions.

Metalhead.jl Development

Difficulty: Medium (175h)

Expected outcomes: Help us improve Metalhead.jl by

  • adding new models

  • porting pre-trained weights

  • extending the model interfaces to make them more customizable
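For orientation, the existing pretrained-model workflow looks roughly like the sketch below. This is illustrative only: constructor names and keywords such as `pretrain` may differ across Metalhead.jl versions.

```julia
using Metalhead, Flux

# Load a ResNet-18 with ported pre-trained weights
# (keyword name is an assumption; check the current Metalhead docs).
model = ResNet(18; pretrain = true)

# Run inference on a dummy 224×224 RGB image.
# Flux uses WHCN (width, height, channels, batch) layout.
x = rand(Float32, 224, 224, 3, 1)
ŷ = model(x)   # class logits for the batch
```

Porting weights for a new model and extending interfaces like the constructor above are exactly the kinds of contributions this project covers.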

Skills: Familiarity with vision model architectures and Flux.jl

Mentors: Kyle Daruwalla

FastAI.jl Time Series Development

Difficulty: Medium (350h)

In this project, you will assist the ML community team with building time series methods for FastAI.jl on top of the existing JuliaML + FluxML ecosystem packages. Some familiarity with these packages is preferred, but it is not required.

Expected outcomes: You will

  • load a working time series dataset using the FastAI.jl data registry

  • create new block methods for time series tasks

  • load at least one working time series model into a learner

  • develop an example tutorial that ties all the previous steps together

Skills: Familiarity with deep learning pipelines, common practices, Flux.jl, and recurrent neural networks

Mentors: Lorenz Ohly, Kyle Daruwalla, Brian Chen

FastAI.jl Text Development

Difficulty: Medium (350h)

In this project, you will assist the ML community team with building text methods for FastAI.jl on top of the existing JuliaML + FluxML ecosystem packages. Some familiarity with these packages is preferred, but it is not required.

Expected outcomes: You will

  • load a working text dataset using the FastAI.jl data registry

  • create new block methods for textual tasks

  • load at least one working text model into a learner

  • develop an example tutorial that ties all the previous steps together

Skills: Familiarity with deep learning pipelines, common practices, Flux.jl, and JuliaText

Mentors: Lorenz Ohly, Kyle Daruwalla, Brian Chen

Differentiable Computer Vision

Difficulty: Hard (350h)

Create a library of utility functions that can consume Julia's imaging libraries and make them differentiable. With Zygote.jl, we have a platform for taking a general-purpose package and applying automatic differentiation to it.

Expected outcomes: You will

  • write AD rules for functions in existing computer vision libraries

  • demonstrate the use of these newly differentiable libraries for tasks such as homography regression
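Writing AD rules of this kind typically means defining custom `rrule`s via ChainRules.jl. A minimal sketch of the pattern, using a hypothetical pixel-clamping function (the function and its name are illustrative, not taken from any existing imaging library):

```julia
using ChainRulesCore

# Clip pixel values into [0, 1].
clamp01(x) = clamp.(x, 0.0, 1.0)

# Custom reverse rule: the pullback zeroes the incoming gradient
# wherever the input was clipped, and passes it through otherwise.
function ChainRulesCore.rrule(::typeof(clamp01), x)
    y = clamp01(x)
    function clamp01_pullback(ȳ)
        x̄ = ȳ .* (0.0 .<= x .<= 1.0)
        return (NoTangent(), x̄)
    end
    return y, clamp01_pullback
end
```

Real imaging operations (warps, filters, homographies) follow the same pattern, just with more involved pullbacks; "a lot of custom adjoints" in the skills list refers to writing many such rules.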

Skills: Familiarity with automatic differentiation, deep learning, and defining (a lot of) custom adjoints

Mentors: Dhairya Gandhi

FermiNets: Generative Synthesis for Automating the Choice of Neural Architectures

Difficulty: Hard (175h)

The application of machine learning requires a practitioner to understand how to optimize a neural architecture for a given problem, or does it? Recently, techniques in automated machine learning, also known as AutoML, have dropped this requirement by allowing good architectures to be found automatically. One such method is the FermiNet, which employs generative synthesis to produce a neural architecture that satisfies certain operational requirements.

Expected outcomes: An implementation of the FermiNet in Flux that allows for automated synthesis of neural networks.

Mentors: Chris Rackauckas and Dhairya Gandhi

Differentiable Rendering

Difficulty: Hard (350h+)

We have an existing package, RayTracer.jl, which is motivated by OpenDR and performs differentiable raytracing with Flux.jl and Zygote.jl.

Expected outcomes: You will

  • implement at least two alternative rendering models, such as NeRF, VolSDF, or neural raytracing

  • make improvements to RayTracer.jl to use the latest Flux libraries

  • update RayTracer.jl for ChainRules.jl

Skills: GPU programming, deep learning, familiarity with the literature, familiarity with defining custom adjoints

Mentors: Dhairya Gandhi, Avik Pal, Julian Samaroo