Duration: 2–4 months (project-based)
Type: Contract / Research Collaboration (Paid)
About the Project
We are looking for a Master’s or PhD student to work on fine-tuning large language models (LLMs) for domain-specific tasks. The goal is to take an existing pretrained model (e.g., Meta AI’s LLaMA-class models or similar) and specialize it for a narrow, high-value use case using efficient fine-tuning techniques.
This is a hands-on applied project designed for someone who wants real-world experience deploying and optimising LLM systems.
Help drive the next wave of applied AI by demonstrating how fine-tuned LLMs can unlock advanced, real-world use cases beyond general-purpose foundation models. Organizations that require domain-specific accuracy, self-hosted deployments, customisable workflows, or performance beyond out-of-the-box capabilities increasingly rely on fine-tuned models to meet those needs.
Through this project, you will contribute to building specialised AI systems that deliver improved accuracy, efficiency, and control over general-purpose baselines. You will also help bridge the gap between academic knowledge and real-world application by using fine-tuning techniques to solve concrete business problems.
What You’ll Work On
- Fine-tuning pre-trained LLMs on small to medium datasets (500–20k examples)
- Implementing parameter-efficient fine-tuning (e.g., LoRA-style methods)
- Optimising training for cost and performance
- Running experiments on GPU cloud infrastructure
- Evaluating model performance and tradeoffs (specialisation vs generalisation)
- Deploying fine-tuned models for inference
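The parameter-efficient fine-tuning mentioned above can be illustrated with a minimal NumPy sketch of the LoRA idea: rather than updating a full weight matrix, you train two small low-rank matrices and add their scaled product to the frozen base weight. Dimensions, names, and the forward function here are illustrative, not taken from any specific library.

```python
import numpy as np

# LoRA-style sketch: W (d_out x d_in) stays frozen; only A (r x d_in)
# and B (d_out x r) are trained, with r much smaller than d_in/d_out.
rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 1024, 1024, 8, 16

W = rng.standard_normal((d_out, d_in)).astype(np.float32)     # frozen base weight
A = 0.01 * rng.standard_normal((r, d_in)).astype(np.float32)  # trainable
B = np.zeros((d_out, r), dtype=np.float32)                    # trainable, zero-init

def lora_forward(x):
    # y = W x + (alpha / r) * B (A x): base path plus low-rank update
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in).astype(np.float32)
y = lora_forward(x)

# Zero-initialising B means the adapted layer starts identical to the base layer.
assert np.allclose(y, W @ x)

# Parameter efficiency: trainable parameters drop from d_out * d_in
# to r * (d_in + d_out).
full_params = d_out * d_in        # 1,048,576
lora_params = r * (d_in + d_out)  # 16,384
print(f"trainable fraction: {lora_params / full_params:.4%}")
```

In practice the same arithmetic is what makes LoRA-style methods cheap: the low-rank update trains well under 2% of the layer's parameters in this configuration.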
Requirements
- Strong Python skills
- Experience with deep learning frameworks: PyTorch (preferred) or TensorFlow
- Experience with Hugging Face Transformers or similar ecosystems
- Hands-on experience training or fine-tuning transformer models on GPUs (local or cloud-based)
- Previous experience using cloud platforms for model training or deployment (e.g., AWS, GCP, Azure, RunPod or similar GPU providers)
- Experience working with or fine-tuning open-weight LLM families (Gemma-3, Qwen-3.5, Llama 4, GPT-OSS, Mistral...)
- Hands-on experience with LoRA
- Solid grasp of core concepts: fine-tuning vs pretraining, overfitting and generalisation, and model evaluation
- Strong business awareness: ability to understand the context of the fine-tuning task and translate domain requirements into clear modeling objectives
- MSc or PhD student in Computer Science, Machine Learning, AI, or related field
- Alternatively, 6 months of hands-on experience training and fine-tuning deep learning models
- Has worked on LLMs in research or industry
- Has fine-tuned at least one transformer model
- Comfortable working independently
- Interested in applied AI and real-world constraints (cost, latency, memory)
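The requirements above mention overfitting, generalisation, and model evaluation; one standard check during fine-tuning is comparing training and validation loss (or perplexity). A toy sketch with hypothetical loss values:

```python
import math

def perplexity(mean_ce_loss: float) -> float:
    # Perplexity is the exponential of the mean per-token cross-entropy loss.
    return math.exp(mean_ce_loss)

# Hypothetical per-token losses after a few epochs of fine-tuning.
train_loss, val_loss = 1.2, 2.1

gap = val_loss - train_loss
print(f"train ppl: {perplexity(train_loss):.2f}, val ppl: {perplexity(val_loss):.2f}")

# A widening train/validation gap is the classic overfitting signal:
# the model is specialising to the fine-tuning set at the cost of generalisation.
if gap > 0.5:  # arbitrary threshold, for illustration only
    print("large generalisation gap: add data, regularise, or train for fewer steps")
```

The threshold and loss values are made up; the point is only that evaluation on held-out data, not training loss alone, drives the specialisation-vs-generalisation tradeoffs this role involves.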
What You'll Gain
- Real-world experience fine-tuning large models (30B–100B parameter class)
- Exposure to production constraints and deployment
- Opportunity to co-author technical writeups if applicable
- Strong applied portfolio project
What We Offer
- 100% Remote Work: Work from anywhere with flexibility and autonomy
- Dynamic, High-Impact Projects: Work on cutting-edge ML and GenAI solutions across diverse industries
- International Clients: Collaborate with global organizations and solve real-world challenges at scale
- Urban Sports Club Membership: Supporting your physical and mental wellbeing
- Monthly Bolt Credits: For rides
- Company Events & Offsites: Regular team gatherings to connect, collaborate, and celebrate
Ready to apply?
Join TensorOps and take your career to the next level!
Application takes less than 5 minutes

