Customer DevOps Engineer

Sayari

About Sayari: 
Sayari is the counterparty and supply chain risk intelligence provider trusted by government agencies, multinational corporations, and financial institutions. Its intuitive network analysis platform surfaces hidden risk through integrated corporate ownership, supply chain, trade transaction and risk intelligence data from over 250 jurisdictions. Sayari is headquartered in Washington, D.C., and its solutions are used by thousands of frontline analysts in over 35 countries.
Our company culture is defined by a dedication to our mission of using open data to enhance visibility into global commercial and financial networks, a passion for finding novel approaches to complex problems, and an understanding that diverse perspectives create optimal outcomes. We embrace cross-team collaboration, encourage training and learning opportunities, and reward initiative and innovation. If you like working with supportive, high-performing, and curious teams, Sayari is the place for you.
POSITION DESCRIPTION
Sayari’s Infrastructure team is growing, and we’re seeking a highly skilled Customer DevOps Engineer who will play a crucial role in designing, implementing, and maintaining our cloud infrastructure for some of our largest customers. This role involves driving the automation of our deployment processes, ensuring that our systems are scalable, secure, and continuously integrated. You will bring your expertise in both cloud engineering and DevOps practices to enhance the efficiency and reliability of our technology stack. Your vision should be to design a robust platform that empowers the Product and Data teams and ensures seamless delivery of high-quality products on resilient infrastructure.
Job Responsibilities
  • Serve as the key technical advisor for our customers, building lasting relationships and offering expert support.
  • Guide customers through self-hosted product deployments across various cloud platforms (AWS, GCP, Azure) and with different tools (Kubernetes, AWS ECR, Helm).
  • Design, implement, and manage scalable, secure, and cost-effective cloud infrastructure across IaaS providers like Azure, GCP and AWS.
  • Develop, maintain, and improve Infrastructure as Code (IaC) using tools such as Terraform to automate cloud resource provisioning and management.
  • Implement and manage Kubernetes-based environments, including containerized applications deployed with Helm, and resolve issues when they occur.
  • Build and improve automated testing, security scanning, and monitoring of CI/CD within the customer’s environment.
  • Develop and maintain automation scripts and tools to streamline cloud operations, DevOps processes, and internal workflows, enhancing productivity across teams.
  • Partner with the customer’s security team to implement cloud security best practices, including IAM, encryption, and network security.
  • Support data engineers in maintaining and optimizing data tools, pipelines, and workflows for efficient data processing.
  • Implement comprehensive monitoring solutions and automate backup processes and failover mechanisms.
  • Create and maintain detailed documentation and diagrams.
Required Skills & Experience
  • 7+ years of experience in Cloud DevOps Engineering or a similar role
  • Working knowledge of and comfort with Microsoft Azure – this will be your initial focus
  • Extensive experience with GCP and/or AWS 
  • Excellent written and verbal communication skills, able to articulate technical concepts and solutions effectively, both with internal stakeholders and our customers’ key stakeholders
  • Proficiency configuring and managing Kubernetes clusters
  • Proficiency developing infrastructure as code using tools such as Terraform
  • Excellent problem-solving skills with the ability to diagnose and resolve complex issues related to cloud infrastructure, deployments, and DevOps processes
  • Proficiency integrating automated testing, security scanning, and monitoring within CI/CD processes
  • Strong scripting skills in languages such as Python, Bash, or Go
  • Experience developing internal tools to streamline processes and increase productivity
  • Experience managing and scaling large databases in a production environment
  • Excellent collaboration skills, with the ability to work closely with the customer’s product, data, and security teams
Source: Remotive Remote Jobs RSS Feed
