
TensorOps

AI Researcher

Remote
Hiring Remotely in USA
Internship

Location: Remote

Duration: 2–4 months (project-based)

Type: Contract / Research Collaboration (Paid)


About the Project

We are looking for a Master’s or PhD student to work on fine-tuning large language models (LLMs) for domain-specific tasks. The goal is to take an existing pretrained model (e.g., Meta AI’s LLaMA-class models) and specialize it for a narrow, high-value use case using efficient fine-tuning techniques.

This is a hands-on applied project designed for someone who wants real-world experience deploying and optimising LLM systems.

Help drive the next wave of applied AI by demonstrating how fine-tuned LLMs can unlock advanced, real-world use cases beyond general-purpose foundation models. Organizations that require domain-specific accuracy, self-hosted deployments, customisable workflows, or performance beyond out-of-the-box capabilities increasingly rely on fine-tuned models to meet those needs.

Through this project, you will contribute to building specialised AI systems that deliver improved accuracy, efficiency, and control compared to out-of-the-box models. You will also help bridge the gap between academic knowledge and real-world application by applying fine-tuning techniques to solve concrete business problems.


What You’ll Work On
  • Fine-tuning pre-trained LLMs on small to medium datasets (500–20k examples)
  • Implementing parameter-efficient fine-tuning (e.g., LoRA-style methods)
  • Optimising training for cost and performance
  • Running experiments on GPU cloud infrastructure
  • Evaluating model performance and tradeoffs (specialisation vs generalisation)
  • Deploying fine-tuned models for inference
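The parameter-efficient fine-tuning listed above works by freezing the pretrained weights and training only a small low-rank update alongside them. A minimal PyTorch sketch of the idea (the `LoRALinear` class and the rank/alpha values are illustrative assumptions, not the project's actual stack):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank update:
    y = base(x) + (alpha / r) * x @ A^T @ B^T, with A (r x in), B (out x r)."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pretrained weights stay frozen
        # B starts at zero, so training begins from the pretrained behaviour
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)

layer = LoRALinear(nn.Linear(768, 768), r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"{trainable} trainable of {total} total parameters")  # 12288 of 602880
```

In practice, libraries such as Hugging Face PEFT apply this kind of wrapping across a whole transformer; the payoff is that only the low-rank factors (here roughly 2% of the layer's parameters) need gradients, optimiser state, and storage.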

Experience
  • Strong Python skills
  • Experience with deep learning frameworks: PyTorch (preferred) or TensorFlow
  • Experience with Hugging Face Transformers or similar ecosystems
  • Hands-on experience training or fine-tuning transformer models on GPUs (local or cloud-based)
  • Previous experience using cloud platforms for model training or deployment (e.g., AWS, GCP, Azure, RunPod or similar GPU providers)
  • Experience working with or fine-tuning open-weight LLM families (Gemma-3, Qwen-3.5, Llama 4, GPT-OSS, Mistral...)
  • Hands-on experience with LoRA

Understanding of:
  • Fine-tuning vs pretraining
  • Overfitting and generalisation
  • Model evaluation

Plus strong business awareness: the ability to understand the context of the fine-tuning task and translate domain requirements into clear modelling objectives.

What You Bring
  • MSc or PhD student in Computer Science, Machine Learning, AI, or a related field
  • Alternatively, 6 months of hands-on experience training and fine-tuning deep learning models
  • Experience working on LLMs in research or industry
  • Experience fine-tuning at least one transformer model
  • Comfortable working independently
  • Interest in applied AI and real-world constraints (cost, latency, memory)

What You’ll Gain

  • Real-world experience fine-tuning large models (30B–100B parameter class)
  • Exposure to production constraints and deployment
  • Opportunity to co-author technical writeups if applicable
  • Strong applied portfolio project

What We Offer
  • 100% Remote Work: Work from anywhere with flexibility and autonomy 
  • Dynamic, High-Impact Projects: Work on cutting-edge ML and GenAI solutions across diverse industries
  • International Clients: Collaborate with global organizations and solve real-world challenges at scale
  • Urban Sports Club Membership: Supporting your physical and mental wellbeing
  • Monthly Bolt Credits: For rides
  • Company Events & Offsites: Regular team gatherings to connect, collaborate, and celebrate

Top Skills

AWS
Azure
GCP
Hugging Face Transformers
Python
PyTorch
TensorFlow
