Big Data Engineer Consultant

Accenture

(San Francisco, California)
Full Time, Travel Required
Job Posting Details
About Accenture

Accenture is a leading global professional services company, providing a broad range of services and solutions in strategy, consulting, digital, technology and operations.

Summary

The Big Data Engineer Consultant empowers clients to turn information into action by gathering, analyzing and modeling client data to enable smarter decision making. The role uses a broad set of analytical tools and techniques to develop quantitative and qualitative business insights, and works with partners as necessary to integrate systems and data quickly and effectively, regardless of technical challenges or business environments.

Responsibilities
  • Data Engineers at the Consultant level are responsible for the architecture, design and implementation of full-scale Hadoop and NoSQL solutions, including data acquisition, storage, transformation, security, data management and data analysis using these technologies.
  • A solid understanding of infrastructure planning, scaling, design and operational considerations that are unique to Hadoop, NoSQL and other emerging data technologies is required.
  • We are looking for candidates who have a broad set of technology skills across these areas and who can demonstrate an ability to identify and apply Hadoop and NoSQL solutions to challenges with data and provide better data solutions to industries.
Ideal Candidate

  • Minimum 2 years designing and implementing relational data models and working with RDBMS (preferred)
  • Minimum 2 years working with traditional as well as Big Data ETL tools (preferred)
  • Minimum 2 years of experience designing and building REST web services (preferred)
  • Experience designing and building statistical analysis models, machine learning models and other analytical models on large data sets using these technologies (e.g. R, MLlib, Mahout, Spark, GraphX) (preferred)
  • Minimum 1 year of experience implementing large-scale cloud data solutions using AWS data services, e.g. EMR, Redshift (preferred)
  • 2+ years of hands-on experience designing, implementing and operationalizing production data solutions using emerging technologies such as the Hadoop ecosystem (MapReduce, Hive, HBase, Spark, Sqoop, Flume, Pig, Kafka, etc.), NoSQL (e.g. Cassandra, MongoDB), in-memory data technologies and data munging technologies
  • Architecting large scale Hadoop/NoSQL operational environments for production deployments
  • Designing and building different data access patterns for Hadoop/NoSQL data stores
  • Managing and modeling data using Hadoop and NoSQL data stores
  • Metadata management with Hadoop and NoSQL data in a hybrid environment
  • Experience with data munging / data wrangling tools and technologies

Basic Qualifications

  • Bachelor's degree in Computer Science, Engineering or Technical Science, or 3 years of IT/programming experience
  • Minimum 2 years of experience building and deploying Java applications in a Linux/Unix environment
  • Minimum 1 year of experience designing and building large-scale data loading, manipulation, processing, analysis, blending and exploration solutions using Hadoop/NoSQL technologies (e.g. HDFS, Hive, Sqoop, Flume, Spark, Kafka, HBase, Cassandra, MongoDB)
  • Minimum 1 year of experience architecting and organizing data at scale for Hadoop/NoSQL data stores
  • Minimum 1 year of experience coding data analysis for production Hadoop/NoSQL applications using MapReduce (Java), Spark, Pig, Hadoop Streaming, HiveQL or Perl/Python/PHP

Skills Desired
  • AWS
  • Cloud
  • Data Wrangling
  • Hadoop
  • HBase
  • Java
  • Linux
  • Machine Learning Models
  • Perl
  • PHP
  • Programming
  • Python
  • Statistical Analysis
  • Unix
  • Apache Cassandra
  • Apache Flume
  • Apache Hive
  • Apache Kafka
  • Apache Spark
  • MapReduce
  • MongoDB
  • NoSQL
  • Pig
  • R
  • REST
  • Sqoop
  • Computer Science
  • Redshift
  • RDBMS
  • Hadoop Streaming
  • Engineering
  • ETL tools
  • Metadata Management Tools
  • Bachelor’s Degree
