# Senior Data Engineer (Snowflake, Python, ETL) | Hybrid in GDL

## Job Description

We are seeking a **Lead Data Engineer** with strong expertise in **Snowflake**, **Python**, and **ETL development** to join our Mexico delivery center. This role leads the design and delivery of scalable data architectures and pipelines, ensuring quality, performance, and alignment with client business goals. The ideal candidate combines deep technical knowledge with leadership and mentoring capabilities.

## Responsibilities

- Lead the **design, development, and optimization** of end-to-end data pipelines using **Snowflake** and **Python**.
- Architect and implement **ETL/ELT processes** for large-scale data ingestion, transformation, and integration.
- Define and enforce **data modeling**, **naming conventions**, and **coding standards** across projects.
- Collaborate with architects, analysts, and business stakeholders to translate requirements into technical designs.
- Drive **performance tuning**, **scalability**, and **cost optimization** in Snowflake environments.
- Develop reusable frameworks, templates, and automation scripts to accelerate delivery.
- Mentor and guide mid-level and junior engineers, fostering technical growth and best practices.
- Partner with DevOps teams to implement **CI/CD pipelines**, data quality validation, and automated testing.
- Support data governance and compliance efforts, ensuring secure and auditable data operations.

## Qualifications

- **5+ years of experience** in data engineering or data architecture roles.
- Proven expertise with **Snowflake** (data warehouse architecture, query optimization, role-based security).
- Strong proficiency in **Python** for data transformation, automation, and integration scripting.
- Deep experience with **ETL/ELT tools** (Azure Data Factory, Airflow, Informatica, or similar).
- Advanced **SQL** skills for large-scale analytics and performance tuning.
- Experience implementing solutions on **cloud platforms** (Azure, AWS, or GCP).
- Solid understanding of **data modeling** (star/snowflake schemas) and **data warehouse design**.
- Excellent communication skills and the ability to lead technical discussions with global stakeholders.
- English proficiency (B2+ level) for client-facing collaboration.

## Nice to Have

- Familiarity with **Databricks**, **Azure Synapse**, or **BigQuery**.
- Experience with **data observability**, **data lineage**, and **metadata management tools**.
- Knowledge of **containerization** (Docker, Kubernetes) and **infrastructure as code** (Terraform).