
Machine Learning Engineer (LLM, NLP, AI)

Asia / Europe / Remote

$5,000 – $8,300 per month

We’re now looking for a Machine Learning Engineer with deep expertise in Natural Language Processing (NLP) and Large Language Models (LLMs). This is a hybrid role that blends custom model development with LLM API integration to ship intelligent, production-ready features. You’ll work across the full lifecycle—from preparing training data and fine-tuning models, to designing retrieval pipelines and deploying performant inference systems in the cloud.


What You’ll Do

  • Integrate hosted LLM APIs (e.g., OpenAI, Anthropic) and custom models to support intelligent in-product behavior.

  • Build and fine-tune transformer models using PyTorch and HuggingFace.

  • Design and deploy retrieval-augmented generation (RAG) pipelines with vector databases (e.g., pgvector) and graph-based reasoning (e.g., Neo4j).

  • Develop scalable inference systems using vLLM, speculative decoding, and optimized serving techniques.

  • Build modular, production-grade pipelines for training, evaluation, and deployment.

  • Collaborate closely with product, design, and full-stack teams to ship features that bring AI to end users.

  • Own the infrastructure (Docker, Cloud Run, GCP), ensuring speed, reliability, and observability.

What You Bring

  • Strong Python engineering background with clean, tested, and maintainable code.

  • Proven experience building with transformer-based models, including custom training and fine-tuning.

  • Deep familiarity with HuggingFace, PyTorch, tokenization, and evaluation frameworks.

  • Experience integrating and orchestrating LLM APIs (OpenAI, Anthropic) into user-facing products.

  • Understanding of semantic search, vector storage (FAISS, pgvector), and hybrid symbolic-neural approaches.

  • Experience designing or consuming graph-based knowledge systems (e.g., Neo4j, property graphs).

  • Ability to build and debug scalable training and inference systems.

Bonus Points For

  • Hands-on experience with Docker and production deployment on Google Cloud (GKE, Cloud Run).

  • Experience with RLHF, reward models, or reinforcement learning for LLM alignment.

  • Knowledge of document understanding, OCR, or structured PDF parsing.

  • Exposure to monitoring and observability tools (e.g., Prometheus, Grafana, OpenTelemetry).

  • Background in linguistics, semantics, or computational reasoning.

Salary & Contract

  • Compensation: $60,000 – $100,000 per year, plus an annual bonus based on profit and performance.

  • For the right person, we’re open to crafting a package that goes well beyond this range, including equity and fast-track opportunities.

  • Paid via deel.com (standard contract).

  • Asia/Pacific or EU timezone.

How to Apply

  1. Apply by emailing vlad@variant.net and include your resume.

  2. We'll invite you for a call with our Lead AI Engineer.

  3. If we see a potential match, you'll be invited to complete a small challenge requiring up to 8 hours of your time.

Posted on: 7/16/2025

Variant Group

At Variant Group, we are at the forefront of reimagining day-to-day challenges. Our B2C SaaS products are designed to transform challenging tasks — from crafting a standout resume to generating complex legal contracts — into simplified, engaging processes accessible to all.
