Ingersoll Rand is committed to achieving workforce diversity reflective of our communities. We are an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, color, family or medical care leave, gender identity or expression, genetic information, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran status, race, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable laws, regulations, and ordinances.
Role summary
Enable and scale Ingersoll Rand's GenAI program by designing, building, and operating the production infrastructure that powers AI-driven applications across the enterprise. This role focuses on DevOps, cloud infrastructure, CI/CD, observability, and platform reliability for GenAI systems built on LLM APIs and Snowflake-native capabilities.
Own the operational lifecycle of LLM-powered systems, including prompt versioning, model configuration, cost controls, and production reliability across Snowflake-native and API-based GenAI platforms.
You will work closely with AI engineers and application developers to turn prototypes into secure, reliable, observable, and scalable AI applications, ensuring smooth integration with enterprise systems and data platforms. This is a DevOps and platform engineering role with a strong focus on production-grade AI systems.
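As a concrete (and purely illustrative) sketch of the "prompt versioning and model configuration" responsibility above, a minimal in-process prompt registry might look like the following. All names here are hypothetical; a real deployment would back this with a database or an observability platform such as Langfuse rather than in-memory storage.

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass(frozen=True)
class PromptVersion:
    """One immutable, versioned prompt plus its model configuration."""
    version: int
    template: str
    model: str
    temperature: float

@dataclass
class PromptRegistry:
    """Minimal in-memory registry; production systems would persist this."""
    _versions: dict = field(default_factory=dict)  # name -> list[PromptVersion]

    def publish(self, name: str, template: str, model: str,
                temperature: float = 0.2) -> PromptVersion:
        """Append a new immutable version for this prompt name."""
        history = self._versions.setdefault(name, [])
        pv = PromptVersion(version=len(history) + 1, template=template,
                           model=model, temperature=temperature)
        history.append(pv)
        return pv

    def get(self, name: str, version: int | None = None) -> PromptVersion:
        """Fetch the latest version, or pin an older one for rollback."""
        history = self._versions[name]
        return history[-1] if version is None else history[version - 1]

registry = PromptRegistry()
registry.publish("summarize", "Summarize: {text}", model="gpt-4o")
registry.publish("summarize", "Summarize briefly: {text}", model="gpt-4o")
latest = registry.get("summarize")              # version 2
pinned = registry.get("summarize", version=1)   # rollback to version 1
```

Keeping versions immutable is what makes deployment rollback and environment promotion safe: a production incident can be mitigated by re-pinning to a known-good version rather than editing a prompt in place.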
The core challenge
GenAI teams can build powerful applications quickly using LLM APIs, but productionizing them at enterprise scale is hard. Challenges include environment consistency, secure data access, observability, cost control, CI/CD automation, and reliable integrations with core business systems.
This role bridges that gap by providing standardized infrastructure, deployment pipelines, and operational frameworks so AI teams can move fast without sacrificing reliability, security, or governance.
Key responsibilities
GenAI platform & infrastructure
Design, build, and maintain cloud infrastructure to host GenAI applications using GCP and Snowflake container services
Support Snowflake-based AI workflows, including data ingestion, Cortex Agents, Analyst, and Search
Define standardized, reusable infrastructure patterns for AI applications across development, staging, and production environments
Implement cost-aware infrastructure patterns (warehouse sizing, service isolation, token budgeting) for GenAI workloads
Explore, build, and support proof-of-concept initiatives to evaluate emerging GenAI and MLOps platforms and architectures, focusing on deployment, orchestration, monitoring, and governance of LLM-based systems
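The "token budgeting" pattern named in the responsibilities above can be sketched as a small guard that refuses LLM calls once a team's token allowance is exhausted. This is a hedged, self-contained illustration with hypothetical names, not a description of Ingersoll Rand's actual implementation:

```python
class TokenBudget:
    """Track LLM token spend against a fixed budget (hypothetical sketch).

    A production version would be shared state (e.g. in a database or cache),
    scoped per team or application, and reset on a billing cycle.
    """

    def __init__(self, limit_tokens: int):
        self.limit = limit_tokens
        self.used = 0

    def try_consume(self, prompt_tokens: int, completion_tokens: int) -> bool:
        """Reserve tokens for one call; refuse if it would exceed the budget."""
        cost = prompt_tokens + completion_tokens
        if self.used + cost > self.limit:
            return False
        self.used += cost
        return True

budget = TokenBudget(limit_tokens=10_000)
assert budget.try_consume(3_000, 1_000)    # 4,000 tokens used, call allowed
assert not budget.try_consume(8_000, 0)    # would exceed the 10,000-token cap
```

Checking the budget before the call (rather than after) is the design choice that actually caps spend; post-hoc accounting alone only reports overruns.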
CI/CD & automation
Build and maintain CI/CD pipelines using GitHub for AI applications and platform services
Automate infrastructure provisioning and environment configuration using infrastructure-as-code
Enable safe, repeatable deployments with versioning, rollback, and environment promotion strategies
Observability & reliability
Implement observability for GenAI systems using Langfuse and Snowflake observability tools to continuously improve AI system reliability and usefulness
Monitor application health, latency, usage, errors, and cost using dashboards, alerts, and runbooks to support reliable production operations
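To make the health, latency, error, and cost monitoring above concrete, here is a minimal sketch that rolls per-call records up into the kind of metrics a dashboard or alert would consume. The record shape and function name are hypothetical; in practice these metrics would come from a tool such as Langfuse or Snowflake's observability features rather than hand-rolled code.

```python
def summarize_calls(records):
    """Roll up LLM call records into basic health metrics.

    Each record is a (latency_ms, ok, total_tokens) tuple — a hypothetical
    shape standing in for whatever the tracing layer actually emits.
    """
    latencies = sorted(r[0] for r in records)
    errors = sum(1 for r in records if not r[1])
    tokens = sum(r[2] for r in records)
    # Nearest-rank p95: index into the sorted latencies, clamped to the end.
    p95 = latencies[min(len(latencies) - 1, int(0.95 * len(latencies)))]
    return {
        "p95_latency_ms": p95,
        "error_rate": errors / len(records),
        "total_tokens": tokens,
    }

sample = [(100, True, 50), (200, True, 60), (300, False, 70), (400, True, 80)]
metrics = summarize_calls(sample)
# metrics == {"p95_latency_ms": 400, "error_rate": 0.25, "total_tokens": 260}
```

Tracking token totals alongside latency and errors is what lets the same pipeline feed both reliability alerts and the cost controls mentioned elsewhere in this posting.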
Cloud & container operations
Manage containerized workloads across GCP and Snowflake container services
Ensure secure networking, secrets management, access controls, and environment isolation
Optimize performance, scalability, and cost for AI application workloads
Enterprise integrations
Support and operationalize integrations between GenAI applications and enterprise systems such as SAP, Salesforce, SharePoint, and other internal/external platforms
Ensure reliability, security, and observability of API-based and event-driven integrations
Partner closely with AI engineers, data engineers, and IT teams to remove operational blockers
Provide documentation, templates, and best practices that enable teams to deploy and operate independently
Contribute to standards for security, reliability, and governance across the GenAI platform
Required qualifications
3+ years in DevOps, platform engineering, or software infrastructure roles; 1-2+ years specifically with ML/AI infrastructure or MLOps
Experience operating LLM-based applications in production, including prompt management, cost monitoring, and reliability practices
Strong experience with CI/CD pipelines (GitHub Actions preferred)
Hands-on experience with containerized applications (Docker; Kubernetes or managed container platforms)
Experience operating workloads on GCP or similar cloud platforms
Proficiency with infrastructure-as-code tools (Terraform or equivalent)
Strong scripting skills (Python and/or Bash)
Experience implementing monitoring, logging, and observability for production systems
Experience supporting API-based applications and integrations
Ability to troubleshoot and operate complex distributed systems
Strong communication skills and the ability to collaborate across technical and business teams
Fluent in English (written and spoken)
Bachelor's or master's degree in computer science, software engineering, IT, or a related field (or equivalent experience)
Preferred qualifications
Experience with Snowflake, including data ingestion pipelines and Snowflake-native applications
Familiarity with GenAI application architectures (RAG, agents, prompt orchestration, API-based LLM usage)
Experience with Langfuse or similar AI observability tools
Experience with data versioning tools (DVC, Pachyderm, lakeFS)
Knowledge of vector databases and LLM infrastructure (Pinecone, Weaviate, Milvus, Chroma)
Cloud or MLOps certifications (AWS Machine Learning Specialty, AWS Solutions Architect, Kubernetes CKA/CKAD, Azure AI Engineer, GCP ML Engineer)
Manufacturing or industrial IoT experience
Experience with compliance and governance frameworks for AI/ML systems
What this role is
Infrastructure engineer who enables AI teams to move faster through automation and robust tooling
Systems thinker who balances reliability, scalability, and cost efficiency
Bridge between AI innovation and production operations who translates complex requirements into practical solutions
Continuous learner who keeps current with the rapidly evolving AI-ops ecosystem and cloud-native technologies
Ingersoll Rand Inc. (NYSE: IR), driven by an entrepreneurial spirit and ownership mindset, is dedicated to helping make life better for our employees, customers and communities. Customers lean on us for our technology-driven excellence in mission-critical flow creation and industrial solutions across 40+ respected brands, where our products and services excel in the most complex and harsh conditions. Our employees develop customers for life through their daily commitment to expertise, productivity and efficiency. For more information, visit www.irco.com.
Special accommodation
If you are a person with a disability and need assistance applying for a job, please submit a request.
Lean on us to help you make life better
We think and act like owners.
We are committed to making our customers successful.
We are bold in our aspirations while moving forward with humility and integrity.
We foster inspired teams.