Key responsibilities:
* Design, develop, and implement data pipelines and ETL processes using Python with Databricks and/or IICS (Informatica Intelligent Cloud Services).
* Collaborate with data scientists and analysts to understand data requirements and deliver solutions that meet business needs.
* Optimize existing data workflows for performance and scalability.
* Monitor and troubleshoot data pipeline issues, ensuring timely resolution.
* Document data processes, workflows, and technical specifications.
* Stay current with industry trends and technologies in data engineering and cloud services.
Qualifications:
* Proven experience in Python programming and data manipulation.
* Intermediate knowledge of Databricks and its ecosystem (Spark, Delta Lake, etc.).
* Experience with IICS or similar cloud-based data integration tools.
* Familiarity with SQL and relational database management systems (e.g., PostgreSQL, MySQL).
* Understanding of data warehousing concepts and best practices.
* Solid problem-solving skills and attention to detail.
* Strong communication and collaboration skills.
* Experience with cloud platforms (e.g., AWS, Azure, GCP).
* Familiarity with Agile development methodologies.