*Required Qualifications*
- Bachelor's degree in Computer Science, Engineering, or a related field
- Previous experience in data engineering, with a focus on the extract and load stages of the pipeline
- Experience with GCP, dbt, Dagster, BigQuery, Cloud Functions, Adverity, Cloud Storage, Pub/Sub, Cloud Build, Cloud Logging, Airtable, and GitHub (a minimal ingestion sketch follows this list)
- Strong knowledge of advanced SQL, Python, JavaScript, and data modeling principles
- Experience with data quality and validation checks
- Experience optimizing data pipelines for scalability, reliability, and performance
- Hands-on experience with structured and unstructured database design
- Experience with digital marketing data sets, including Google Ads, Microsoft Ads, Facebook Ads, Pinterest Ads, TikTok Ads, The Trade Desk, and Google Analytics, is a plus
- Experience creating and maintaining APIs is a plus
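As a rough illustration of how several of these services fit together, here is a minimal sketch of a Pub/Sub-triggered Cloud Function that loads a newly landed Cloud Storage file into BigQuery. It assumes the Pub/Sub message is a Cloud Storage object notification; the project, dataset, and table names are placeholders, not the team's actual pipeline.

```python
# Hypothetical extract-and-load glue: a Pub/Sub-triggered Cloud Function
# that loads a newly landed Cloud Storage file into a BigQuery table.
import base64
import json

import functions_framework
from google.cloud import bigquery


@functions_framework.cloud_event
def load_to_bigquery(cloud_event):
    """Triggered by a Pub/Sub message announcing a new Cloud Storage object."""
    # Pub/Sub delivers its payload base64-encoded inside the CloudEvent.
    payload = json.loads(
        base64.b64decode(cloud_event.data["message"]["data"]).decode("utf-8")
    )
    # Cloud Storage notifications include the bucket and object name.
    uri = f"gs://{payload['bucket']}/{payload['name']}"

    client = bigquery.Client()
    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
        write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
        autodetect=True,
    )
    # Load the file straight from Cloud Storage into a raw landing table
    # (placeholder table name).
    load_job = client.load_table_from_uri(
        uri, "my_project.raw.landing_table", job_config=job_config
    )
    load_job.result()  # Block so failures surface in Cloud Logging.
```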
*Skills*
- Strong problem-solving skills and the ability to troubleshoot complex issues
- Excellent communication and collaboration skills
- Able to quickly learn new systems and software programs with minimal training, documentation, or guidance
- Works well in a team environment as well as independently
*Responsibilities*
- Design, develop, and implement pipelines and integrations that connect to various client databases and ingest into a single base for our cross-client, agency-wide database
- Build and maintain DAGs for orchestration and scheduling (see the sketch after this list)
- Develop and implement data quality and validation checks
- Optimize data pipelines for scalability, reliability, and performance
- Collaborate with analysts and analytics engineers to ensure data is properly transformed and loaded into BigQuery
- Work with other teams to define requirements for data pipelines and integrations
- Monitor and troubleshoot data pipelines for errors and performance issues
- Implement security and compliance best practices to protect our data
- Maintain and enhance the existing technology stack
- Develop, enhance, and maintain end-user computing or semi-automated tools using Salesforce Datorama, Adverity, Python, SQL, Google BigQuery, Airtable, Google Sheets, Excel, Looker Studio, Tableau, or other relevant BI tools
- Document the data dictionary and data model that support the enterprise data lake
- Maintain reference lists, business rules, and other data documentation
- Maintain high standards of software quality via code standardization, code review, testing, deployment automation, and tooling
- Respond to support requests and troubleshoot data and pipeline issues
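Since the role pairs Dagster orchestration with data quality checks, here is a minimal sketch of that pattern under assumed names: the `ad_spend` asset, the `no_negative_spend` check, and the daily schedule are all illustrative. A real pipeline would extract from an ad platform API or Adverity and load into BigQuery rather than build an in-memory frame.

```python
# Hypothetical Dagster pipeline: one asset, one data quality check,
# and a daily schedule wiring them together.
import pandas as pd
from dagster import (
    AssetCheckResult,
    Definitions,
    ScheduleDefinition,
    asset,
    asset_check,
    define_asset_job,
)


@asset
def ad_spend() -> pd.DataFrame:
    """Stand-in extract step; in practice, pull from an ads API or Adverity."""
    return pd.DataFrame({"campaign_id": [1, 2], "spend": [120.5, 98.0]})


@asset_check(asset=ad_spend)
def no_negative_spend(ad_spend: pd.DataFrame) -> AssetCheckResult:
    """A simple validation check: spend values must be non-negative."""
    bad_rows = int((ad_spend["spend"] < 0).sum())
    return AssetCheckResult(passed=bad_rows == 0, metadata={"bad_rows": bad_rows})


# Package the asset into a job and run it every morning at 06:00.
daily_job = define_asset_job("daily_ad_spend_job", selection=[ad_spend])

defs = Definitions(
    assets=[ad_spend],
    asset_checks=[no_negative_spend],
    jobs=[daily_job],
    schedules=[ScheduleDefinition(job=daily_job, cron_schedule="0 6 * * *")],
)
```

Keeping the validation as an asset check rather than inline asset code lets failures show up separately in the Dagster UI without blocking downstream materializations unless configured to.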
Job type: Full-time
Salary: $25,000.00 - $30,000.00 per month
Schedule:
- Monday to Friday
- 8-hour shift
Benefits:
- Flexible hours
- Option for a permanent contract
Experience:
- SQL: 2 years (preferred)
- Python: 2 years (preferred)
- JavaScript: 2 years (preferred)
Language:
- Fluent English (required)
Work location: Hybrid remote in 66220, San Pedro Garza García, N.L.