Cequence Security protects consumer-facing web applications and APIs at large enterprises from a variety of threats. The Cequence Application Security Platform (ASP) provides runtime visibility, cataloging, and risk assessment of the application fabric, and protects applications from business logic attacks and exploits that target application vulnerabilities. These attacks cost enterprises millions of dollars in fraud, lost revenue, and brand damage. Cequence’s differentiated approach to application security requires no application integration and can be deployed quickly on customer premises or consumed as SaaS. Fortune 500 companies in the finance, banking, retail, social media, and travel/hospitality industries protect their revenue-generating applications with Cequence Security.
Data Engineer Position Overview
As a Data Engineer at Cequence Security, you will be responsible for developing and enhancing real-time data pipelines, and for enabling sophisticated analysis of data at rest in multiple data lakes, while maintaining strict performance and throughput requirements. You will also work closely with other Data Engineers, Data Scientists, and security experts to bring new ideas in data exploration, analytics, and machine learning to fruition as product features that enable new ways of catching malicious actors and help protect our customers from various forms of exploit and abuse.
There are multiple openings for this Data Engineer role at both our Sunnyvale, CA headquarters and our Cincinnati, OH development center.
Responsibilities
- Build and enhance an optimal real-time data pipeline architecture using technologies such as Spark Streaming, Kafka Streams, Kafka messaging, Elasticsearch, and other big data technologies.
- Identify, design, and implement improvements in the data pipelines to achieve ever-higher throughput and scalability.
- Work with data scientists and security experts to strive for greater functionality in our core products.
- Create data tools that help analytics and data science team members build and optimize our product into an innovative industry leader.
- Work within an Agile workflow (Jira) to organize tasks and collaborate with other team members.
- Work in a Test-Driven Development environment focused on producing reliable, well-documented production code.
Qualifications
- Bachelor’s degree in Computer Science or another relevant field, or equivalent experience.
- Expert-level experience with programming languages such as Java, Scala, or Kotlin.
- Minimum of 4 years of experience building and optimizing big data pipelines, architectures, and data sets.
- Experience with message queuing, stream processing, and highly scalable big data stores.
- Experience with big data tools such as Spark, Kafka, Elasticsearch, and Hadoop.
- Experience with stream-processing systems such as Flink, Spark Streaming, and Kafka Streams.
- Experience with cloud services such as AWS EC2, EMR, and EKS is a plus.
- Experience working with Docker and Kubernetes is a plus.
Come talk with us if you’re looking to make a difference and work in a fast-paced, fun, and rewarding environment. It’s the best career decision you can make!