An exploration of AI-driven symbolic regression, implementing and analyzing two key methods: Equation Learner (EQL) neural networks and a Seq2Seq Transformer model.
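The EQL idea can be illustrated with a small sketch. This is not the project's implementation, just a toy numpy forward pass under assumed unit counts: each hidden unit applies a primitive function (identity, sin, cos, or a product) to a linear projection of the input, and L1-sparsified weights would later let a closed-form expression be read off the network.

```python
import numpy as np

def eql_layer(x, W, b):
    """One toy EQL layer. x: (d,), W: (4, d), b: (4,).

    Each row of W feeds one primitive-function unit; sparse weights
    make the composed expression symbolically readable.
    """
    z = W @ x + b
    return np.array([
        z[0],            # identity unit
        np.sin(z[1]),    # sin unit
        np.cos(z[2]),    # cos unit
        z[3] * z[0],     # multiplication unit (pairs two projections)
    ])

x = np.array([0.5, -1.0])
W = np.eye(4, 2)   # toy weights purely for illustration
b = np.zeros(4)
out = eql_layer(x, W, b)
```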
A from-scratch implementation of a scaled-down GPT-2 model in PyTorch, trained on the Snappfood dataset for sentiment-controlled Persian text generation.
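Sentiment control of this kind is commonly done with control-token conditioning; a minimal sketch, assuming hypothetical `<pos>`/`<neg>` tokens are prepended to each review before tokenization (Snappfood's labels are `HAPPY`/`SAD`), so that at inference the chosen token steers generation:

```python
# Hypothetical control tokens; the mapping from dataset labels is assumed.
CONTROL_TOKENS = {"HAPPY": "<pos>", "SAD": "<neg>"}

def build_training_text(label: str, review: str) -> str:
    """Prefix each review with its sentiment control token."""
    return f"{CONTROL_TOKENS[label]} {review}"

sample = build_training_text("HAPPY", "غذا عالی بود")  # "the food was great"
# At generation time, prompting with only "<pos>" biases the model
# toward positive-sentiment continuations.
```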
An exploration of self-supervised representation learning with SimSiam on the CIFAR-10 dataset, compared against a supervised baseline in a low-data regime.
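The core of SimSiam is a symmetrized negative cosine similarity between a predictor output and a stop-gradient projection of the other augmented view. A minimal numpy sketch of just the loss (stop-gradient is indicated by comment, since no autograd is involved here):

```python
import numpy as np

def neg_cosine(p, z):
    """Negative cosine similarity; z is treated as a constant (stop-grad)."""
    p = p / np.linalg.norm(p)
    z = z / np.linalg.norm(z)
    return -float(np.dot(p, z))

def simsiam_loss(p1, z1, p2, z2):
    # Symmetrized loss: the predictor output p of one view is pulled
    # toward the (stop-gradient) projection z of the other view.
    return 0.5 * neg_cosine(p1, z2) + 0.5 * neg_cosine(p2, z1)

# Identical representations give the minimum loss of -1.
v = np.array([1.0, 2.0, 3.0])
loss = simsiam_loss(v, v, v, v)
```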
A from-scratch PyTorch implementation of Low-Rank Adaptation (LoRA) to efficiently fine-tune BERT models for text classification. This project compares the performance and parameter efficiency of LoRA, full fine-tuning, and from-scratch training.
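The LoRA update itself is compact enough to sketch in numpy (the project uses PyTorch; sizes here are toy values). The frozen weight W is augmented with a scaled low-rank product B·A, where B starts at zero so the adapted layer initially matches the pretrained one:

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 8, 8, 2, 4   # toy sizes; rank r << d

W = rng.normal(size=(d_out, d_in))   # frozen pretrained weight
A = rng.normal(size=(r, d_in))       # trainable, random init
B = np.zeros((d_out, r))             # trainable, zero init

def lora_forward(x):
    # Frozen path plus scaled low-rank update; only A and B are trained,
    # so trainable parameters drop from d_out*d_in to r*(d_out + d_in).
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)
y = lora_forward(x)
# With B = 0 the adapted layer initially equals the frozen layer exactly.
```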
A comprehensive implementation of a Neurosymbolic framework for Visual Question Answering (VQA) on the CLEVR dataset. This project translates natural language questions into symbolic programs using three different learning strategies: Supervised (LSTM & Transformer), Reinforcement Learning (REINFORCE), and In-Context Learning (LLM).
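The neurosymbolic pipeline can be sketched with a toy executor. The names and two-op vocabulary below (`filter_color`, `count`) are illustrative assumptions, not the project's actual program grammar: the learned models translate a question into such a program, and a deterministic executor runs it against the scene graph.

```python
# Toy CLEVR-style scene graph.
SCENE = [
    {"color": "red", "shape": "cube"},
    {"color": "red", "shape": "sphere"},
    {"color": "blue", "shape": "cube"},
]

# Hypothetical operator vocabulary.
OPS = {
    "filter_color": lambda objs, arg: [o for o in objs if o["color"] == arg],
    "count": lambda objs, _: len(objs),
}

def execute(program, scene):
    """Run a linear program of (op, arg) pairs against the scene graph."""
    state = scene
    for op, arg in program:
        state = OPS[op](state, arg)
    return state

# "How many red objects are there?" -> [filter_color(red), count]
answer = execute([("filter_color", "red"), ("count", None)], SCENE)
```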