Data Engineer
Capital One
(Plano, Texas)
Capital One Financial Corporation, incorporated on July 21, 1994, is a diversified banking company focused primarily on consumer and commercial lending and deposit origination. Its principal business segments are Local Banking and National Lending.
The Role:
We are looking for driven individuals to join our team of passionate data engineers in creating Capital One’s next generation of data products and capabilities.
- You will build data pipeline frameworks to automate high-volume and real-time data delivery for our Hadoop and streaming data hub
- You will build data APIs and data delivery services that support critical operational and analytical applications for our internal business operations, customers and partners
- You will transform complex analytical models into scalable, production-ready solutions
- You will continuously integrate and ship code into our on-premises and cloud production environments
- You will develop applications from the ground up using a modern technology stack such as Scala, Spark, Postgres, AngularJS, and NoSQL
- You will work directly with Product Owners and customers to deliver data products in a collaborative and agile environment
Responsibilities:
- Develop sustainable, data-driven solutions with current next-gen data technologies to meet the needs of our organization and business customers
- Grasp new technologies rapidly as needed to progress varied initiatives
- Break down data issues and resolve them
- Build robust systems with an eye toward the long-term maintenance and support of the application
- Leverage reusable code modules to solve problems across the team and organization
- Utilize a working knowledge of multiple development languages
Basic Qualifications:
- Bachelor’s Degree or military experience
- At least 2 years of coding experience in data management, data warehousing, or unstructured data environments
- At least 2 years of experience working with leading big data technologies like Cassandra, Accumulo, HBase, Spark, Hadoop, HDFS, Avro, MongoDB, ZooKeeper
Preferred Qualifications:
- Master's Degree
- 2+ years experience with Agile engineering practices
- 2+ years of in-depth experience with the Hadoop stack (MapReduce, Pig, Hive, HBase)
- 2+ years experience with NoSQL implementations (MongoDB, Cassandra, etc.)
- 2+ years experience developing Java-based software solutions
- 2+ years experience in at least one scripting language (Python, Perl, JavaScript, Shell)
- 2+ years experience developing software solutions to solve complex business problems
- 2+ years experience with Relational Database Systems and SQL
- 2+ years experience designing, developing, and implementing ETL
- 2+ years experience with UNIX/Linux, including basic commands and shell scripting
Additional Notes on Compensation
Generous salary and merit-based pay incentives
Skills:
- APIs
- Coding Data
- Data Management
- Hadoop
- HBase
- Java
- JavaScript
- Linux
- Perl
- Promoting Use of Agile Engineering Practices
- Python
- Relational Databases
- Scala
- Unix
- AngularJS
- Apache Avro
- Apache Cassandra
- Apache Hive
- Apache Spark
- Data Warehousing
- MapReduce
- MongoDB
- NoSQL
- Pig
- PostgreSQL Programming
- ETL
- Accumulo
- Apache Zookeeper
- Hadoop Distributed File System
- Unstructured Data
- Software Solutions
- Shell
