Senior Data Architect at Spin
Design and define the Data & AI platform architecture that supports data collection, transformation, and visualization, ensuring the evolution of the data infrastructure over the data lakehouse to meet business needs.
Main responsibilities
1. Design, deploy, and maintain the organization's official data platform, defining a roadmap for its capabilities.
2. Develop and document the evolution roadmap for the data architecture, coordinating with technical teams to gather improvement requirements.
3. Disseminate standards and guidelines for data architecture across projects.
4. Define the framework, standards, and principles of the data architecture platform, including modeling, metadata, security, backup, and reference data.
5. Regularly update data guidelines and standards, promoting them to ensure a unified data design approach.
6. Identify opportunities to improve data ingestion, evaluating and proposing new technologies based on market best practices, including real-time and near-real-time data flows.
7. Define roles and responsibilities within data architecture, and the interaction model between roles, to ensure proper data lifecycle management.
8. Collaborate with data engineering, DataOps, data modeling, data governance, and IT teams to implement and maintain data pipelines supporting DataOps practices.
9. Design comprehensive solutions such as data marketplace and data-sharing initiatives.
10. Work autonomously, taking initiative and ownership of projects.
11. Promote a positive work environment, embodying the company's values.
12. Identify and implement process improvements to enhance efficiency.
13. Build trust-based relationships with user areas.
Required knowledge and experience
* Significant experience in similar roles.
Advanced knowledge
* Bachelor's or master's degree in computer science, IT, or a related field.
* 10+ years designing data lakes on cloud platforms or big data ecosystems, preferably in retail.
* 10+ years working with AWS/GCP/Azure cloud services, including real-time event processing.
* 6+ years with the Databricks platform, focusing on Unity Catalog, MLOps, DataOps, and advanced analytics.
* Knowledge of data integration tools and technologies for ingesting data from various sources (ETL, APIs, etc.).
* Experience with event hubs such as Apache Kafka and cloud-native event tools.
* Understanding of data security and governance best practices.
Familiarity with:
* Agile methodologies (Scrum, Kanban).
* AWS and GCP infrastructure.
* Networking infrastructure.
* Unity Catalog and API gateways.
Seniority level
* Mid-Senior level
Employment type
* Full-time
Job function
* Information Technology