Analytics Engineer

Philippines · Relocation · Hybrid · Senior

As an Analytics Engineer at Salmon, you will play a pivotal role in data modeling and transformation across the Databricks silver and gold layers. You will work closely with Data Scientists, Engineers, and Business Systems Analysts to ensure that datasets align with business needs.

Key responsibilities:

Data Modeling & Transformation

  • Design, build, and maintain scalable data models in Databricks silver (curated data) and gold (business-ready data) layers.

  • Define clear data contracts between silver and gold to ensure consistency and reliability.

  • Apply best practices for dimensional modeling (star/snowflake schemas) to support analytics and reporting (sketched below).
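
For a concrete flavor of this work, here is a minimal gold-layer star-schema sketch in Databricks SQL. The table and column names (gold.dim_customer, gold.fct_payments, amount_php) are hypothetical examples for illustration, not Salmon's actual schema:

  -- Hypothetical gold-layer star schema: one fact table plus an SCD2-style dimension.
  CREATE TABLE IF NOT EXISTS gold.dim_customer (
    customer_key  BIGINT    COMMENT 'Surrogate key',
    customer_id   STRING    COMMENT 'Natural key from the source system',
    segment       STRING,
    valid_from    TIMESTAMP,
    valid_to      TIMESTAMP COMMENT 'NULL marks the current row'
  ) USING DELTA;

  CREATE TABLE IF NOT EXISTS gold.fct_payments (
    payment_id    STRING,
    customer_key  BIGINT COMMENT 'Foreign key to gold.dim_customer',
    payment_date  DATE,
    amount_php    DECIMAL(18, 2)
  ) USING DELTA
  PARTITIONED BY (payment_date);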

Collaboration & Best Practices

  • Partner with data scientists, platform engineers, and business analysts to ensure gold datasets meet business needs.

  • Follow software engineering practices — version control (Git), CI/CD for data pipelines, code reviews, and testing.

  • Contribute to the development of a shared analytics engineering framework (naming standards, reusable templates, testing frameworks).

ETL/ELT Development

  • Develop and optimize transformation pipelines (PySpark/SQL/Delta Live Tables/Databricks Workflows) to process data from bronze → silver → gold.

  • Implement incremental data processing strategies to minimize compute cost and improve pipeline performance (see the sketch after this list).

  • Ensure data quality checks (validations, anomaly detection, deduplication, SCD handling, etc.) are built into transformations.
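
As one illustration of the incremental-plus-deduplication pattern these bullets describe, the sketch below upserts recently changed silver rows into a gold table with a Delta MERGE. The source table and columns (silver.payments, updated_at) are assumptions for the example, and it presumes source and target share the selected columns:

  -- Hypothetical incremental upsert from silver to gold with deduplication.
  MERGE INTO gold.fct_payments AS tgt
  USING (
    -- Keep only the latest version of each payment seen in the window.
    SELECT payment_id, customer_key, payment_date, amount_php
    FROM (
      SELECT s.*,
             ROW_NUMBER() OVER (PARTITION BY payment_id ORDER BY updated_at DESC) AS rn
      FROM silver.payments AS s
      -- Incremental window: reprocess only recent changes, not the full table.
      WHERE s.updated_at >= current_timestamp() - INTERVAL 1 DAY
    ) AS latest
    WHERE rn = 1
  ) AS src
  ON tgt.payment_id = src.payment_id
  WHEN MATCHED THEN UPDATE SET *
  WHEN NOT MATCHED THEN INSERT *;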

Data Quality & Governance

  • Establish and maintain data quality metrics (completeness, accuracy, timeliness) for silver and gold tables (illustrated after this list).

  • Apply data governance standards — consistent naming conventions, documentation, and tagging across datasets.

  • Collaborate with data platform engineers to enforce lineage and observability.
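
To make this concrete, here is one possible way to express quality rules and a completeness metric in Databricks SQL; the constraint name and the tables are invented for the sketch, and the NOT NULL change assumes the column has no existing nulls:

  -- Hypothetical quality rules enforced at write time on a silver table.
  ALTER TABLE silver.payments ALTER COLUMN payment_id SET NOT NULL;
  ALTER TABLE silver.payments ADD CONSTRAINT positive_amount CHECK (amount_php > 0);

  -- Completeness metric: share of today's rows with a resolvable customer id.
  SELECT
    count(*)                      AS total_rows,
    count(customer_id) / count(*) AS customer_id_completeness
  FROM silver.payments
  WHERE payment_date = current_date();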

Business Enablement

  • Work closely with analysts and business stakeholders to understand requirements and translate them into gold-layer datasets.

  • Build reusable, business-friendly datasets that power dashboards, self-service BI tools, and advanced analytics.

  • Maintain documentation (data dictionaries, transformation logic, lineage diagrams).

Performance & Optimization

  • Optimize Databricks SQL queries and Delta Lake performance (Z-ordering, clustering, partitioning); a maintenance sketch follows this list.

  • Monitor and tune workloads to control compute spend on silver and gold pipelines.

  • Implement best practices for caching, indexing, and incremental updates.
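
A minimal sketch of the routine table maintenance this involves, reusing the hypothetical gold.fct_payments table partitioned by payment_date (the date literal is a placeholder):

  -- Compact small files and co-locate rows by a common filter column
  -- so selective queries skip irrelevant data.
  OPTIMIZE gold.fct_payments
  WHERE payment_date >= '2026-01-01'  -- restrict to recent partitions to bound cost
  ZORDER BY (customer_key);

  -- Remove files no longer referenced by the table (default retention applies).
  VACUUM gold.fct_payments;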

Requirements:

  • Strong SQL expertise

    • Ability to write complex, performant queries (CTEs, window functions, joins); a worked example follows the requirements list

    • Experience optimizing queries on large datasets

    • Strong understanding of analytical SQL patterns

  • Hands-on experience with dbt

    • Building and maintaining dbt models (staging, intermediate, marts); a model sketch follows the requirements list

    • Writing reusable macros and Jinja templates

    • Implementing tests, documentation, and exposures

    • Working with dbt version control and CI workflows

  • Data Modeling expertise

    • Strong understanding of dimensional modeling (facts, dimensions, star schemas)

    • Ability to translate business requirements into scalable data models

    • Designing metrics and semantic layers for analytics and BI

    • Experience maintaining a single source of truth for business metrics

  • Analytics Engineering mindset

    • Strong focus on data quality, reliability, and consistency

    • Experience working closely with analysts and business stakeholders

    • Ability to balance technical best practices with business needs

  • Production-ready analytics

    • Experience with data testing, monitoring, and debugging

    • Familiarity with ELT pipelines and modern data stack concepts

    • Comfortable working in Git-based workflows
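
For candidates wondering what these expectations look like in practice, here is a small CTE-plus-window-function example against the hypothetical gold.fct_payments table from the earlier sketches:

  -- Latest payment per customer, a common analytical pattern.
  WITH ranked AS (
    SELECT
      customer_key,
      payment_id,
      amount_php,
      payment_date,
      ROW_NUMBER() OVER (
        PARTITION BY customer_key
        ORDER BY payment_date DESC
      ) AS rn
    FROM gold.fct_payments
  )
  SELECT customer_key, payment_id, amount_php, payment_date
  FROM ranked
  WHERE rn = 1;

And a minimal dbt incremental model of the kind referenced above; the file path and the stg_payments source are hypothetical:

  -- models/marts/fct_payments.sql (hypothetical)
  {{ config(materialized='incremental', unique_key='payment_id') }}

  SELECT payment_id, customer_key, payment_date, amount_php, updated_at
  FROM {{ ref('stg_payments') }}
  {% if is_incremental() %}
    -- On incremental runs, only pick up rows newer than what is already loaded.
    WHERE updated_at > (SELECT max(updated_at) FROM {{ this }})
  {% endif %}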

Nice-to-Have Qualifications

  • Python

    • Writing data transformations or utilities

    • Familiarity with pandas or similar libraries

    • Using Python for data quality checks or automation

  • Databricks

    • Experience working in Databricks notebooks or workflows

    • Understanding of Databricks architecture and job orchestration

  • Apache Spark

    • Experience with Spark SQL or PySpark for large-scale data processing

    • Understanding of distributed data processing concepts

  • Delta Lake

    • Knowledge of Delta Lake tables, ACID transactions, and time travel (a sketch follows this list)

    • Experience designing reliable, incremental data pipelines

  • Cloud data platforms

    • Experience with modern cloud data environments (Databricks, AWS / GCP / Azure)

    • Familiarity with data lakes and lakehouse architectures
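
As a brief illustration of the Delta Lake time-travel features mentioned above (the table name and version number 42 are placeholders):

  -- Query a table as of an earlier version, e.g. to audit or debug a pipeline run.
  SELECT count(*) FROM silver.payments VERSION AS OF 42;

  -- Roll the table back after a bad write.
  RESTORE TABLE silver.payments TO VERSION AS OF 42;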

Bonus

  • Experience supporting BI tools (Looker, Tableau, Power BI, etc.)

  • Strong documentation and communication skills

  • Experience scaling analytics in fast-growing environments

What we offer

  • Passionate international team spanning the globe

  • Rapid professional growth. Merit (and merit only) rules the day

  • Rewards tied to your performance and to the long-term success of Salmon

  • Fast track to grow internationally

  • New office in Manila, Philippines

  • Relocation support for eligible candidates

  • Remote and hybrid options

  • Medical insurance, health and wellness benefits

  • Program of events and activities both online and in person

Published on: 2/2/2026

Salmon


Salmon is a next-generation fintech company founded by Pavel Fedorov, George Chesakov, and Raffy Montemayor – visionary leaders with decades of experience in global finance, banking, and technology. Our mission is bold yet simple: to reshape the banking landscape in the Philippines and prove that people deserve better financial services. 
