Data Engineer
We are seeking an experienced and skilled remote Data Engineer based in Mexico to join our dynamic data team.
The ideal candidate has a strong background in building efficient, scalable data pipelines, designing robust data architectures, and implementing ETL/ELT processes. The role requires proficiency in T-SQL, PySpark, Python, Azure Synapse Analytics, Azure Data Factory, and SSIS. The Data Engineer will play a critical role in ensuring the availability, integrity, and reliability of our data assets, contributing to the success of our data-driven initiatives.
Responsibilities
* Design, develop, and maintain end-to-end data pipelines, ensuring smooth and efficient data flow from various source systems to target destinations.
* Collaborate with cross-functional teams to gather requirements, understand data needs, and implement data solutions that align with business objectives.
* Build and optimize data architectures for storing, processing, and analyzing large volumes of structured and unstructured data.
* Implement robust ETL/ELT processes to transform raw data into clean, usable formats, addressing data quality and consistency concerns.
* Work with data stakeholders to define data integration and transformation requirements, and translate them into technical solutions.
* Develop and maintain documentation for data pipelines, data models, and architectural designs, ensuring knowledge sharing across the team.
* Monitor and troubleshoot data pipelines, addressing performance bottlenecks and data quality issues to ensure data accuracy.
* Collaborate with the data infrastructure team to ensure data security, compliance, and privacy standards are upheld.
* Stay up to date with industry best practices, emerging technologies, and trends in data engineering, contributing insights to the team's continuous improvement efforts.
Qualifications
* Bachelor's degree in Computer Science, Information Technology, or a related field (Master's degree is a plus).
* 6+ years of proven experience as a Data Engineer or in a similar role, with a strong track record of designing and implementing data pipelines and architectures.
* Proficiency in T-SQL, PySpark, Python, and ETL/ELT methodologies, with hands-on experience developing complex data transformations.
* Expertise in Azure Synapse Analytics; additional experience with Azure Data Factory, Azure DevOps, and Azure Databricks is a plus.
* Solid understanding of data warehousing concepts, data modeling, and database design principles.
* Experience with SSIS (SQL Server Integration Services) for ETL processes.
* Familiarity with Spark SQL for big data workloads.
* Strong problem-solving skills and the ability to diagnose and resolve issues in data pipelines.
* Excellent communication skills to collaborate effectively with both technical and non-technical stakeholders.
* Attention to detail and a commitment to producing high-quality work.
* Proactive attitude, adaptability to change, and willingness to learn and apply new technologies.
* Relevant certifications in Azure, data engineering, big data, or related fields are a plus.
* Must be located in Mexico.