AgileEngine Data Engineer (Senior)
AgileEngine is an Inc. 5000 company that creates award-winning software for Fortune 500 brands and trailblazing startups across 17+ industries.
As a Data Engineer, you will build a cloud-native data platform on AWS from the ground up, moving raw data through Bronze, Silver, and Gold layers into clean, trusted, analytics-ready datasets that directly power business decisions.
You will work closely with the Data Architect and Head of Data on pipeline design, data quality, governance, and delivery.
Responsibilities
Build and maintain scalable ETL and ELT pipelines across AWS services.
Implement medallion architecture for data ingestion, transformation, and delivery.
Collaborate with data architects on data modeling and schema design.
Develop ingestion frameworks for structured, semi-structured, and streaming data.
Integrate data quality, lineage, and observability into pipelines.
Work with analytics and business teams to deliver consistent and well-documented data.
Write clean and testable Python and SQL code following best practices.
Support data governance and security standards, including compliance requirements.
Monitor pipeline performance, troubleshoot issues, and optimize scalability and cost efficiency.
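For candidates unfamiliar with the medallion pattern named above, here is a minimal sketch of the Bronze → Silver → Gold flow in plain Python. The column names, cleaning rules, and in-memory dicts are illustrative assumptions only; in the role itself these layers would live in AWS services such as S3, Glue, and Redshift.

```python
# Bronze: raw ingested records, as delivered (messy types, missing keys).
# Silver: cleaned, validated, typed records.
# Gold: analytics-ready aggregates that power business decisions.

def to_silver(bronze_rows):
    """Clean raw Bronze records: drop rows missing a key, normalize fields."""
    silver = []
    for row in bronze_rows:
        if row.get("order_id") is None:
            continue  # data-quality rule: reject records without a key
        silver.append({
            "order_id": row["order_id"],
            "amount": float(row.get("amount", 0)),
            "region": str(row.get("region", "unknown")).strip().lower(),
        })
    return silver

def to_gold(silver_rows):
    """Aggregate cleaned records into a per-region summary table."""
    totals = {}
    for row in silver_rows:
        totals[row["region"]] = totals.get(row["region"], 0.0) + row["amount"]
    return totals

bronze = [
    {"order_id": 1, "amount": "10.5", "region": " US "},
    {"order_id": None, "amount": "99"},          # dropped at the Silver layer
    {"order_id": 2, "amount": "4.5", "region": "us"},
]
print(to_gold(to_silver(bronze)))  # {'us': 15.0}
```

The same shape scales up directly: Bronze as raw S3 objects, Silver as Glue/PySpark transformations, Gold as Redshift or Athena tables.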
Required experience
5+ years of experience as a Data Engineer working with AWS data ecosystems.
Strong experience with AWS services (S3, Glue, Lambda, Kinesis, Redshift, Athena, Step Functions).
Proficiency in Python, PySpark, and SQL for data transformation.
Strong understanding of ETL design patterns and batch and streaming data processing.
Knowledge of data modeling (star schema, snowflake schema, incremental processing).
Experience with orchestration tools (Airflow, Step Functions, dbt).
Understanding of data governance, data quality frameworks, and CI/CD for pipelines.
Experience working in Agile, cross-functional environments.
Upper-Intermediate English level.
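As a quick illustration of the star-schema modeling listed above: one fact table joined to a dimension table, with measures aggregated per dimension attribute. Table and column names are hypothetical, and SQLite stands in here for a warehouse such as Redshift or Athena.

```python
import sqlite3

# A star schema: a central fact table (events/measures) referencing
# small dimension tables (descriptive attributes) by surrogate key.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_region (region_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE fact_sales (sale_id INTEGER, region_id INTEGER, amount REAL);
    INSERT INTO dim_region VALUES (1, 'US'), (2, 'EU');
    INSERT INTO fact_sales VALUES (10, 1, 10.5), (11, 1, 4.5), (12, 2, 7.0);
""")

# Analytics query: join fact to dimension, aggregate the measure.
rows = conn.execute("""
    SELECT d.name, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_region d ON f.region_id = d.region_id
    GROUP BY d.name
    ORDER BY d.name
""").fetchall()
print(rows)  # [('EU', 7.0), ('US', 15.0)]
```

A snowflake schema differs only in that dimensions are further normalized into sub-dimensions; incremental processing means loading only new or changed fact rows on each run.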
Benefits
Professional growth: mentorship, tech talks, personalized growth roadmaps.
Competitive compensation: USD-based pay with education, fitness, and team activity budgets.
Exciting projects: modern solutions with Fortune 500 and top product companies.
Flextime: flexible schedule with remote and office options.
Verato Senior Data Engineer
Verato, the identity experts for healthcare, is a high-growth healthcare technology company that enables better care everywhere by providing a single source of truth for identity to organizations across the care continuum.
This position is based in Mérida, Yucatán, México.
If you do not live in or around Mérida, relocation will be required within 60 days of accepting the position, and Verato will provide a relocation bonus of MXN $42,500 to help with your move.
Daily responsibilities
Develop and maintain data pipelines that power in-product analytics, feed Verato's reference datasets, and enable reporting and analytics by internal teams.
Ensure continuous operation of the data pipelines supporting existing datasets.
Define and promote best-practice data architectures for ingesting data from external suppliers.
Stay abreast of industry best practices regarding data architecture, security, privacy, data lakes, and analytics.
Required experience
5+ years of related work experience in data engineering and developing data pipelines.
3+ years working with either AWS or GCP (Verato is a multi-cloud platform).
Strong proficiency in Python and 3+ years of working with Python for data engineering.
Experience with SQL and cloud data warehouse technologies such as Snowflake, BigQuery, or Redshift.
Experience working in software development organizations using Agile methodologies and CI/CD processes.
Strong oral and written communication skills; fluency in spoken and written English.
Ability to work from Verato's Mérida office; remote work is also available.
Preferred qualifications
Experience in a PCI/HIPAA/HITRUST/SOC 2 Type 2 compliant environment or similar environment with PHI/PII.
Experience ingesting data from syndicated data suppliers.
Knowledge of relevant AWS technologies: S3, Glue, EMR, Athena, DMS, Redshift, AWS Airflow.
Knowledge of relevant GCP technologies: GCS, Pub/Sub, Dataproc, BigQuery, Cloud Composer.
Benefits and salary
Salary: $73,000 – $82,000 USD per month.
Working hours: 8 hours per shift.
Paid holidays and parental leave above legal requirements, sick leave, company parking, flexible hours, referral program.
Health insurance, vision insurance, dental insurance, life insurance.
Other benefits: flex-time, remote or office options, team activities.