I am Théo Morales, a third-year PhD candidate at [Trinity College Dublin](https://tcd.ie) in the School of Computer Science and Statistics. I am passionate about programming and have strong interests in Deep Learning, Computer Vision and Nature-Inspired Optimization Algorithms. The areas of Deep Learning I find most fascinating include *meta-learning, representation learning, self-supervised learning, generative models,* and *test-time adaptation*.
![Myself in the Metaverse](me_AR.jpg)
- [LinkedIn](https://www.linkedin.com/in/theomorales/)
- [GitHub](https://github.com/DubiousCactus)
- [Google Scholar](https://scholar.google.com/citations?user=S0hZCVEAAAAJ&hl=en)
- [Research Gate](https://www.researchgate.net/profile/Theo-Morales)
---
## What is this about?
Welcome to my Obsidian Vault on *Learning to Learn*, where I share my notes along my PhD journey. My research focuses on applying Meta-Learning methods for fast adaptation and accelerated learning in Hand-Object Interaction understanding. Check out my main entries:
- [[A Gentle Introduction to Meta-Learning]]
- [[Gaussian Processes from A to Z]]
- [[The Neural Process Family]]
- [[The Transformer]]
- [[From Diametrically Opposed Ideas to Artificial Intelligence]]
This PhD thesis focuses on improving the robustness of joint Hand-Object Pose Estimation (HOPE) to unseen objects via Test-Time Adaptation. The motivation for this research stems from the fact that Machine Learning algorithms still cannot match humans when it comes to learning new concepts quickly and from few examples. Since hand-object pose estimation is a highly complex problem, simplifying it by decomposing it into sub-tasks, such as object-specific pose estimation, can lead to better performance. See my notes on [[Accelerated Learning in the Context of Hand-Object Interaction]].
Of course, there are many challenges along the way: some are specific to hand and object pose estimation, such as self-occlusion, mesh inter-penetration, or unrealistic grasps and poses; others are linked to learning new tasks from few examples, such as overfitting.
The main objectives of this research are to understand the advantages and limitations of meta-learning methods in the few-shot adaptation scenario, and to improve the state of the art, especially for regression tasks in computer vision. The end goal is to improve the generalization of HOPE with online adaptation, and to show that neural networks that can adapt to novel tasks are more effective in the real world.
## What I do in my free time
I like to stay active, and when I'm not riding my fixie in Dublin, running or climbing in a bouldering gym, you will probably find me reading or writing code.
Did I mention my passion for programming? It started a long time ago, when I first got access to this wonderful thing called the Internet. I was around 12 years old at the time, and although I have a few zips lying around somewhere, I won't share my first programs because they could cause serious eye damage... So here's a quick list of projects I'm either proud of or at least not ashamed of demoing:
### How does PyTorch work?
*A tiny neural network library with a reverse-mode automatic differentiation engine around NumPy arrays.*
⇾ [GitHub Project](https://github.com/DubiousCactus/alumette)
![Alumette is a tiny PyTorch clone written in Python using NumPy arrays.](alumette_logo.png)
Alumette (match in French, as in _a tiny torch_) is a tiny neural network library with a reverse-mode automatic differentiation engine. It is roughly based on Karpathy's micrograd, but it aims to be a little more usable by wrapping NumPy arrays in Tensors and implementing other Tensor optimization gadgets.
#### Why?
There have been a few very confusing times when I could not get my PyTorch module to optimize: the gradients would explode, the loss would blow up, or all sorts of weird things would happen during my backward pass. I used to believe PyTorch was this magical framework that would optimize any neural network with any given optimizer. During those times, **I started to feel like I had to understand autograd and backpropagation in practice**, as Andrej Karpathy explains well in [this Medium article](https://karpathy.medium.com/yes-you-should-understand-backprop-e2f06eab496b). It is also super fun to code, and a nice refresher on calculus and linear algebra :)
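To give you the flavour, here is a minimal sketch of the core idea behind reverse-mode autodiff, in the spirit of micrograd (this is not Alumette's actual API): each operation builds a graph node and remembers how to push gradients back to its parents.

```python
import numpy as np

class Tensor:
    """A minimal NumPy-backed tensor that remembers how it was computed."""

    def __init__(self, data, parents=()):
        self.data = np.asarray(data, dtype=np.float64)
        self.grad = np.zeros_like(self.data)
        self._parents = parents
        self._backward = lambda: None

    def __add__(self, other):
        out = Tensor(self.data + other.data, parents=(self, other))
        def _backward():
            # d(out)/d(self) = d(out)/d(other) = 1
            self.grad += out.grad
            other.grad += out.grad
        out._backward = _backward
        return out

    def __mul__(self, other):
        out = Tensor(self.data * other.data, parents=(self, other))
        def _backward():
            # Chain rule: d(out)/d(self) = other, d(out)/d(other) = self
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward = _backward
        return out

    def backward(self):
        # Topologically sort the graph, then apply the chain rule in reverse.
        topo, visited = [], set()
        def build(t):
            if id(t) not in visited:
                visited.add(id(t))
                for p in t._parents:
                    build(p)
                topo.append(t)
        build(self)
        self.grad = np.ones_like(self.data)  # d(self)/d(self) = 1
        for t in reversed(topo):
            t._backward()

# y = a * b + a  =>  dy/da = b + 1 = 4, dy/db = a = 2
a, b = Tensor(2.0), Tensor(3.0)
y = a * b + a
y.backward()
print(a.grad, b.grad)  # 4.0 2.0
```

Alumette layers more ops and proper neural network machinery on top, but this chain-rule bookkeeping is the heart of autograd.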
### We're all Gaussian here
*A NumPy-based Gaussian Process optimization mini library for learning purposes.*
⇾ [GitHub Project](https://github.com/DubiousCactus/minigauss)
![Posterior sampling from a Gaussian Process.](mini_gauss_optimal_result.png)
Minigauss is a tiny Gaussian Process Python library based on `NumPy`. It is written in a very modular way, such that implementing new parametric mean and covariance functions is a piece of cake!
I wrote it to get a deeper understanding of Gaussian Processes, which led me to write one of my main blog entries: [[Gaussian Processes from A to Z]].
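If you just want the gist without reading the whole post, here is a back-of-the-envelope sketch of GP posterior inference with a squared-exponential kernel (minigauss's actual interface is more modular than this):

```python
import numpy as np

def rbf(xa, xb, lengthscale=1.0, variance=1.0):
    """Squared-exponential covariance: k(x, x') = s^2 exp(-(x - x')^2 / 2l^2)."""
    sq_dists = (xa[:, None] - xb[None, :]) ** 2
    return variance * np.exp(-0.5 * sq_dists / lengthscale**2)

def gp_posterior(x_train, y_train, x_test, noise=1e-4):
    """Posterior mean and covariance of a zero-mean GP at the test points."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    K_s = rbf(x_train, x_test)
    K_ss = rbf(x_test, x_test)
    # Solve K alpha = y via Cholesky instead of inverting K.
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    cov = K_ss - v.T @ v
    return mean, cov

x_train = np.array([-2.0, 0.0, 1.5])
y_train = np.sin(x_train)
x_test = np.linspace(-3.0, 3.0, 100)
mean, cov = gp_posterior(x_train, y_train, x_test)
# A pinch of jitter keeps the covariance positive-definite for sampling.
samples = np.random.multivariate_normal(mean, cov + 1e-6 * np.eye(100), size=5)
```

Solving with the Cholesky factor instead of inverting the covariance matrix is the standard trick: it is cheaper and numerically much more stable.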
### It's genetics, man!
⇾ [GitHub Project](https://github.com/DubiousCactus/GeneticPainter)
![The Mona Lisa in 150 triangles.](mona_gen.png)
![Accelerated evolution.](genetic_lisa.gif)
A straightforward C implementation of a genetic algorithm with crossover and mutation, using OpenGL to render each individual as a set of overlapping triangles.
The main goal of this project was to experiment with different parameters and find a good way to encode pictures into DNA strings, so performance is not optimal (no multi-threading) but still decent.
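The project itself is in C, but the encoding idea fits in a few lines of Python. This is an illustrative sketch with a hypothetical genome layout, not the repo's actual one: a genome is a flat list of triangle genes, and crossover cuts on triangle boundaries so offspring inherit whole triangles.

```python
import random

# Hypothetical encoding: an individual is a flat list of genes in [0, 1],
# ten per triangle: (x1, y1, x2, y2, x3, y3, r, g, b, alpha).
GENES_PER_TRIANGLE = 10

def random_individual(n_triangles=150):
    return [random.random() for _ in range(n_triangles * GENES_PER_TRIANGLE)]

def crossover(mum, dad):
    """Single-point crossover, cutting on a triangle boundary."""
    cut = random.randrange(len(mum) // GENES_PER_TRIANGLE) * GENES_PER_TRIANGLE
    return mum[:cut] + dad[cut:]

def mutate(genome, rate=0.01, strength=0.1):
    """Nudge a small fraction of genes, clamping them to [0, 1]."""
    return [
        min(1.0, max(0.0, g + random.uniform(-strength, strength)))
        if random.random() < rate
        else g
        for g in genome
    ]
```

Fitness is then something like the pixel-wise difference between the rendered triangles and the target image, which is where OpenGL comes in.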
---
⇾ [GitHub Project](https://github.com/DubiousCactus/GeneticAlgorithm)
![The genetic algorithm solving the traveling salesman problem in your terminal.](GeneticAlgo.png)
This project solves the Travelling Salesman Problem using a genetic algorithm with crossover and mutation of chromosomes. The data is visualized as a graph, using **ncurses** for a nice _old school_ vibe!
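For the TSP, a chromosome is naturally a permutation of cities, which rules out naive single-point crossover (it would duplicate cities). A common fix, sketched below in Python for illustration (the repo itself may use different operators), is order crossover plus swap mutation:

```python
import random

def order_crossover(p1, p2):
    """OX: copy a slice from parent 1, fill the rest in parent 2's order."""
    a, b = sorted(random.sample(range(len(p1)), 2))
    segment = set(p1[a:b])
    child = [None] * len(p1)
    child[a:b] = p1[a:b]
    fill = iter(city for city in p2 if city not in segment)
    return [city if city is not None else next(fill) for city in child]

def swap_mutation(tour, rate=0.05):
    """Randomly swap a few cities in the tour."""
    tour = tour[:]
    for i in range(len(tour)):
        if random.random() < rate:
            j = random.randrange(len(tour))
            tour[i], tour[j] = tour[j], tour[i]
    return tour

parent1 = list(range(8))
parent2 = [3, 7, 0, 4, 1, 6, 2, 5]
child = order_crossover(parent1, parent2)  # a valid tour, no duplicate cities
```

Both operators preserve the permutation property, and the fitness to maximize is then just the negative of the total tour length.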
### Computer Vision from scratch
*A minimal-dependency Computer Vision & Image Processing library for learning purposes.*
⇾ [GitHub Project](https://github.com/DubiousCactus/retina)
![Retina - ORB Algorithm keypoints on a video of myself doing a pop shove it.](retina.png)
Retina is an (almost) zero-dependency computer vision library built from scratch in C++17 (or trying to be). The only required dependency is FFmpeg (libavutil, libavformat, libavcodec, libswscale), for the video decoding part. It was written to be easily readable (yes, I'm looking at you, OpenCV) and understandable, while still trying to achieve decent performance. Technically, it could even be built for embedded platforms if you don't need video decoding.
### DreamGazer: a hackathon experiment
⇾ [GitHub Project Page](https://DubiousCactus.github.io/DreamGazer/)
![DreamGazer](https://DubiousCactus.github.io/DreamGazer/dreamgazer.png)
This project was created during AUHack 2018. DreamGazer is an art experiment: gaze at pictures and admire the uncanny outcome you just created.
Google's DeepDream neural network has been around for quite some time now. Although it creates fantastic psychedelic images, it still lacks the best part of human dreaming: the rapid eye movements of so-called REM sleep. So why not combine human eye interaction with DeepDream somehow?
### A Cheap Chip-8 virtual machine
⇾ [GitHub Project](https://github.com/DubiousCactus/Cheap8)
[![asciicast](https://asciinema.org/a/194611.png)](https://asciinema.org/a/194611)
Writing virtual machines is a lot of fun! Cheap8 uses nothing but the ncurses library to provide old-school terminal graphics!
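If you have never written one: a Chip-8 interpreter is just a fetch-decode-execute loop over two-byte opcodes. Cheap8 is written in C, but here is the idea in a few lines of Python, with three real Chip-8 opcodes and a tiny made-up program:

```python
memory = bytearray(4096)
V = [0] * 16   # 16 8-bit registers, V0..VF
pc = 0x200     # Chip-8 programs are loaded at address 0x200

# Toy program: V0 = 5; V0 += 3; jump back to 0x200 (loop forever)
memory[0x200:0x206] = bytes([0x60, 0x05, 0x70, 0x03, 0x12, 0x00])

for _ in range(10):  # run a few cycles
    opcode = (memory[pc] << 8) | memory[pc + 1]  # fetch two bytes, big-endian
    pc += 2
    x, nn, nnn = (opcode >> 8) & 0xF, opcode & 0xFF, opcode & 0xFFF
    if opcode & 0xF000 == 0x6000:    # 6XNN: VX = NN
        V[x] = nn
    elif opcode & 0xF000 == 0x7000:  # 7XNN: VX += NN (no carry flag)
        V[x] = (V[x] + nn) & 0xFF
    elif opcode & 0xF000 == 0x1000:  # 1NNN: jump to NNN
        pc = nnn
```

The full instruction set is only about 35 opcodes, which is what makes Chip-8 the perfect first virtual machine.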