The Staff DevOps Engineer - Big Data (Federal) position requires 6+ years of hands-on experience with Kubernetes, a deep understanding of Kubernetes architecture, concepts, and operations, and strong knowledge of querying and analyzing large-scale data using VictoriaMetrics, Prometheus, Spark, Flink, and Grafana.
Requirements
- Experience leveraging AI, or thinking critically about how to integrate it, in work processes, decision-making, or problem-solving
- 6+ years of hands-on experience with Kubernetes in a production environment
- Deep understanding of Kubernetes architecture, concepts, and operations
- Deep understanding of Hadoop/Big Data Ecosystem
- Strong knowledge of querying and analyzing large-scale data using VictoriaMetrics, Prometheus, Spark, Flink, and Grafana
- Experience working with systems such as HDFS, Yarn, Hive, HBase, Kafka, RabbitMQ, Impala, Kudu, Redis, MariaDB, and PostgreSQL
- Experience supporting CI/CD pipelines for automated application deployment to Kubernetes
- Strong Linux Systems Administration skills
- Strong scripting skills in Bash and Python for automation and task management
- Proficiency with Git and other version control systems
- Familiarity with Cloudera Data Platform (CDP) and its ecosystem
- Ability to learn quickly in a fast-paced, dynamic team environment
Benefits
- 401(k) Plan with company match
- Employee Stock Purchase Plan (ESPP)
- Flexible time away plan
- Family leave programs
- Flexible spending accounts