**Direct Client Requirement**
Title: AWS Cloud Data Engineer with PySpark
Location: Carlsbad, CA
Rate: DOE. If your experience and skills match, call us immediately for submission.
Duration: 12+ Months
Interview Type: Skype or Phone
Work Status: Successful applicants must be legally authorized to work in the U.S.
Job Type: W2
Experience: 7+ YEARS
- 7 to 9 years of working experience in data integration and pipeline development with data warehousing.
- Experience with AWS Cloud data integration using Apache Spark, EMR, Glue, Kafka, Kinesis, and Lambda in S3, Redshift, RDS, and MongoDB/DynamoDB ecosystems.
- Strong real-world experience in Python development, especially PySpark in an AWS Cloud environment.
- Design, develop, test, deploy, maintain, and improve data integration pipelines.
- Experience with Python and common Python libraries.
- Strong analytical database experience: writing complex queries, query optimization, debugging, user-defined functions, views, indexes, etc.
- Strong experience with source control systems such as Git and Bitbucket, and with Jenkins build and continuous integration tools.
- Experience with continuous deployment (CI/CD).
- Databricks, Airflow, and Apache Spark experience is a plus.
- Experience with databases (PostgreSQL, Redshift, MySQL, or similar).
- Exposure to ETL tools, including Informatica or others.
- BS/MS degree in CS, CE, or EE.
Apply here or send your resume to email@example.com
Position Keywords: Apache Spark, EMR, Glue, Kafka, Kinesis, Lambda, S3, Redshift, RDS, MongoDB/DynamoDB, Data Warehousing, PySpark in AWS Cloud
% Travel Required: None
Job Posted by: Consulting Services
Job ID: OOJ - 2171